I0316 13:03:28.867790 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0316 13:03:28.867957 7 e2e.go:124] Starting e2e run "47f29d42-c6ff-4dc9-a320-1ad8ab3df580" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1584363807 - Will randomize all specs
Will run 275 of 4992 specs

Mar 16 13:03:28.923: INFO: >>> kubeConfig: /root/.kube/config
Mar 16 13:03:28.928: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 16 13:03:28.963: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 16 13:03:28.995: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 16 13:03:28.995: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 16 13:03:28.995: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 16 13:03:29.003: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 16 13:03:29.003: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 16 13:03:29.003: INFO: e2e test version: v1.19.0-alpha.0.779+84dc7046797aad
Mar 16 13:03:29.004: INFO: kube-apiserver version: v1.17.0
Mar 16 13:03:29.004: INFO: >>> kubeConfig: /root/.kube/config
Mar 16 13:03:29.009: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:03:29.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
Mar 16 13:03:29.074: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Mar 16 13:03:29.076: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 16 13:03:29.098: INFO: Waiting for terminating namespaces to be deleted...
Mar 16 13:03:29.100: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Mar 16 13:03:29.112: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 16 13:03:29.112: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 16 13:03:29.113: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 16 13:03:29.113: INFO: 	Container kube-proxy ready: true, restart count 0
Mar 16 13:03:29.113: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Mar 16 13:03:29.127: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 16 13:03:29.127: INFO: 	Container kindnet-cni ready: true, restart count 0
Mar 16 13:03:29.127: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 16 13:03:29.127: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-7a635e77-70e4-4972-a427-88c0f8fc974d 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-7a635e77-70e4-4972-a427-88c0f8fc974d off the node latest-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-7a635e77-70e4-4972-a427-88c0f8fc974d
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:08:37.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9361" for this suite.
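The hostPort-conflict rule this spec exercises (pod4 binds 0.0.0.0:54322, so pod5's 127.0.0.1:54322 on the same node cannot schedule) can be sketched as follows. This is a simplified illustration of the overlap check, not the scheduler's actual code; the function name is invented:

```python
# Sketch of the node-ports conflict rule: two pods on the same node conflict
# when they request the same hostPort/protocol and their hostIPs overlap.
# hostIP 0.0.0.0 (or unset, the "empty string" in the log) overlaps everything.
def host_ports_conflict(a, b):
    """a, b: (host_ip, host_port, protocol) tuples; '' means 0.0.0.0."""
    ip_a, port_a, proto_a = a
    ip_b, port_b, proto_b = b
    if (port_a, proto_a) != (port_b, proto_b):
        return False
    wildcard = {"", "0.0.0.0"}
    return ip_a == ip_b or ip_a in wildcard or ip_b in wildcard

# pod4 holds 0.0.0.0:54322/TCP; pod5 wants 127.0.0.1:54322/TCP on that node.
pod4 = ("0.0.0.0", 54322, "TCP")
pod5 = ("127.0.0.1", 54322, "TCP")
assert host_ports_conflict(pod4, pod5)                     # pod5 stays Pending
assert not host_ports_conflict(pod4, ("", 54321, "TCP"))   # different port is fine
```

This is why the spec expects pod5 "not scheduled": the wildcard address already claims the port on every node IP.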
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:308.300 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":1,"skipped":14,"failed":0} [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:08:37.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:08:37.471: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version' Mar 16 13:08:37.763: INFO: stderr: "" Mar 16 13:08:37.763: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.0.779+84dc7046797aad\", 
GitCommit:\"84dc7046797aad80f258b6740a98e79199c8bb4d\", GitTreeState:\"clean\", BuildDate:\"2020-03-15T16:56:42Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:08:37.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6742" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":2,"skipped":14,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:08:37.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-f0c28e80-c34f-41ac-be75-b09c471ff7e7 STEP: Creating a pod to test consume configMaps Mar 16 13:08:37.924: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0dc24dba-cf1c-4e9c-bf28-1bb6805b549b" in namespace 
"projected-1183" to be "Succeeded or Failed" Mar 16 13:08:37.961: INFO: Pod "pod-projected-configmaps-0dc24dba-cf1c-4e9c-bf28-1bb6805b549b": Phase="Pending", Reason="", readiness=false. Elapsed: 37.719589ms Mar 16 13:08:39.965: INFO: Pod "pod-projected-configmaps-0dc24dba-cf1c-4e9c-bf28-1bb6805b549b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041690519s Mar 16 13:08:41.969: INFO: Pod "pod-projected-configmaps-0dc24dba-cf1c-4e9c-bf28-1bb6805b549b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045682817s STEP: Saw pod success Mar 16 13:08:41.969: INFO: Pod "pod-projected-configmaps-0dc24dba-cf1c-4e9c-bf28-1bb6805b549b" satisfied condition "Succeeded or Failed" Mar 16 13:08:41.972: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-0dc24dba-cf1c-4e9c-bf28-1bb6805b549b container projected-configmap-volume-test: STEP: delete the pod Mar 16 13:08:42.023: INFO: Waiting for pod pod-projected-configmaps-0dc24dba-cf1c-4e9c-bf28-1bb6805b549b to disappear Mar 16 13:08:42.051: INFO: Pod pod-projected-configmaps-0dc24dba-cf1c-4e9c-bf28-1bb6805b549b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:08:42.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1183" for this suite. 
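The repeated 'Waiting up to 5m0s for pod "..." to be "Succeeded or Failed"' entries above come from a poll loop: the framework re-reads the pod phase every couple of seconds until it reaches a terminal phase or the timeout expires. A minimal Python sketch of that pattern (names invented; `get_phase` stands in for the API call):

```python
import time

# Poll a pod's phase until it is terminal ("Succeeded" or "Failed") or we
# run out of time, mirroring the Elapsed: ... lines in the log above.
def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)  # the ~2s gap between successive log entries
    raise TimeoutError("pod did not reach a terminal phase")

# Simulated pod that is Pending twice, then Succeeded (as in the log).
phases = iter(["Pending", "Pending", "Succeeded"])
assert wait_for_terminal_phase(lambda: next(phases), sleep=lambda _: None) == "Succeeded"
```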
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":26,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:08:42.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium Mar 16 13:08:42.130: INFO: Waiting up to 5m0s for pod "pod-06ff2762-d77c-4645-99cd-f5de1a499ba7" in namespace "emptydir-5275" to be "Succeeded or Failed" Mar 16 13:08:42.134: INFO: Pod "pod-06ff2762-d77c-4645-99cd-f5de1a499ba7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158174ms Mar 16 13:08:44.137: INFO: Pod "pod-06ff2762-d77c-4645-99cd-f5de1a499ba7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006834092s Mar 16 13:08:46.141: INFO: Pod "pod-06ff2762-d77c-4645-99cd-f5de1a499ba7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011178112s STEP: Saw pod success Mar 16 13:08:46.141: INFO: Pod "pod-06ff2762-d77c-4645-99cd-f5de1a499ba7" satisfied condition "Succeeded or Failed" Mar 16 13:08:46.144: INFO: Trying to get logs from node latest-worker2 pod pod-06ff2762-d77c-4645-99cd-f5de1a499ba7 container test-container: STEP: delete the pod Mar 16 13:08:46.176: INFO: Waiting for pod pod-06ff2762-d77c-4645-99cd-f5de1a499ba7 to disappear Mar 16 13:08:46.181: INFO: Pod pod-06ff2762-d77c-4645-99cd-f5de1a499ba7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:08:46.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5275" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":33,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:08:46.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 16 13:08:46.417: INFO: Waiting up to 5m0s for pod "pod-0161c7ca-5600-4c43-87e9-55fe45068b56" in namespace "emptydir-2553" to be "Succeeded or Failed" 
Mar 16 13:08:46.476: INFO: Pod "pod-0161c7ca-5600-4c43-87e9-55fe45068b56": Phase="Pending", Reason="", readiness=false. Elapsed: 59.715195ms Mar 16 13:08:48.481: INFO: Pod "pod-0161c7ca-5600-4c43-87e9-55fe45068b56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064587829s Mar 16 13:08:50.485: INFO: Pod "pod-0161c7ca-5600-4c43-87e9-55fe45068b56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068503497s STEP: Saw pod success Mar 16 13:08:50.485: INFO: Pod "pod-0161c7ca-5600-4c43-87e9-55fe45068b56" satisfied condition "Succeeded or Failed" Mar 16 13:08:50.488: INFO: Trying to get logs from node latest-worker2 pod pod-0161c7ca-5600-4c43-87e9-55fe45068b56 container test-container: STEP: delete the pod Mar 16 13:08:50.522: INFO: Waiting for pod pod-0161c7ca-5600-4c43-87e9-55fe45068b56 to disappear Mar 16 13:08:50.534: INFO: Pod pod-0161c7ca-5600-4c43-87e9-55fe45068b56 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:08:50.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2553" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":5,"skipped":51,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:08:50.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating pod Mar 16 13:08:54.645: INFO: Pod pod-hostip-3924da52-04c0-4e92-8664-71115fbd2954 has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:08:54.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5176" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":132,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:08:54.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-bc69b666-4f2d-4814-bce9-1239f1c93072 STEP: Creating a pod to test consume secrets Mar 16 13:08:54.747: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3f6f35ed-78b1-4174-b2d6-1bdc5255f419" in namespace "projected-3796" to be "Succeeded or Failed" Mar 16 13:08:54.751: INFO: Pod "pod-projected-secrets-3f6f35ed-78b1-4174-b2d6-1bdc5255f419": Phase="Pending", Reason="", readiness=false. Elapsed: 3.635359ms Mar 16 13:08:56.757: INFO: Pod "pod-projected-secrets-3f6f35ed-78b1-4174-b2d6-1bdc5255f419": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010281821s Mar 16 13:08:58.770: INFO: Pod "pod-projected-secrets-3f6f35ed-78b1-4174-b2d6-1bdc5255f419": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023297219s Mar 16 13:09:00.774: INFO: Pod "pod-projected-secrets-3f6f35ed-78b1-4174-b2d6-1bdc5255f419": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.027494872s STEP: Saw pod success Mar 16 13:09:00.775: INFO: Pod "pod-projected-secrets-3f6f35ed-78b1-4174-b2d6-1bdc5255f419" satisfied condition "Succeeded or Failed" Mar 16 13:09:00.778: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-3f6f35ed-78b1-4174-b2d6-1bdc5255f419 container projected-secret-volume-test: STEP: delete the pod Mar 16 13:09:00.801: INFO: Waiting for pod pod-projected-secrets-3f6f35ed-78b1-4174-b2d6-1bdc5255f419 to disappear Mar 16 13:09:00.805: INFO: Pod pod-projected-secrets-3f6f35ed-78b1-4174-b2d6-1bdc5255f419 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:09:00.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3796" for this suite. • [SLOW TEST:6.159 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":150,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:09:00.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:09:00.936: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:09:02.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6921" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":275,"completed":8,"skipped":154,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:09:02.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-a3cb60db-aced-47ae-bb3d-6a416a6c5567 STEP: Creating a pod to test consume configMaps Mar 16 13:09:02.183: INFO: Waiting up to 5m0s for pod "pod-configmaps-051ccc44-034b-498e-ba40-2d9364c4acac" in namespace "configmap-6795" to be "Succeeded or Failed" Mar 16 13:09:02.210: INFO: Pod "pod-configmaps-051ccc44-034b-498e-ba40-2d9364c4acac": Phase="Pending", Reason="", readiness=false. Elapsed: 26.798839ms Mar 16 13:09:04.352: INFO: Pod "pod-configmaps-051ccc44-034b-498e-ba40-2d9364c4acac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168555208s Mar 16 13:09:06.356: INFO: Pod "pod-configmaps-051ccc44-034b-498e-ba40-2d9364c4acac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.173072627s STEP: Saw pod success Mar 16 13:09:06.356: INFO: Pod "pod-configmaps-051ccc44-034b-498e-ba40-2d9364c4acac" satisfied condition "Succeeded or Failed" Mar 16 13:09:06.360: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-051ccc44-034b-498e-ba40-2d9364c4acac container configmap-volume-test: STEP: delete the pod Mar 16 13:09:06.384: INFO: Waiting for pod pod-configmaps-051ccc44-034b-498e-ba40-2d9364c4acac to disappear Mar 16 13:09:06.387: INFO: Pod pod-configmaps-051ccc44-034b-498e-ba40-2d9364c4acac no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:09:06.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6795" for this suite. 
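The "consumable from pods in volume with defaultMode set" specs above all follow one shape: a ConfigMap volume with a `defaultMode`, mounted into a short-lived test container that prints the mounted file. A hypothetical sketch of that pod spec as a plain Python dict (names, image, and paths invented, not the test's actual manifest):

```python
# Hypothetical shape of the pod these configMap-volume tests create.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-configmaps-example"},
    "spec": {
        "volumes": [{
            "name": "configmap-volume",
            # defaultMode controls the permission bits of the projected files.
            "configMap": {"name": "configmap-test-volume", "defaultMode": 0o400},
        }],
        "containers": [{
            "name": "configmap-volume-test",
            "image": "example/mounttest",  # hypothetical test image
            "args": ["--file_content=/etc/configmap-volume/data-1"],
            "volumeMounts": [{"name": "configmap-volume",
                              "mountPath": "/etc/configmap-volume"}],
        }],
        "restartPolicy": "Never",  # pod runs once, then "Succeeded or Failed"
    },
}

assert pod["spec"]["volumes"][0]["configMap"]["defaultMode"] == 0o400
```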
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":165,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:09:06.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 16 13:09:10.964: INFO: Successfully updated pod "annotationupdate7fcfaf5d-ef16-49cc-b8ff-0b6a90114033" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:09:15.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-660" for this suite. 
• [SLOW TEST:8.839 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":205,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:09:15.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 16 13:09:20.405: INFO: Successfully updated pod "labelsupdate69449bbe-ca95-4e99-9d77-bf1aa869d2e6" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:09:22.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3539" for this suite. 
• [SLOW TEST:7.442 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":224,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:09:22.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Mar 16 13:09:22.940: INFO: Created pod &Pod{ObjectMeta:{dns-542 dns-542 /api/v1/namespaces/dns-542/pods/dns-542 d74e904c-d0ac-44f3-8015-420b12e21ba2 266897 0 2020-03-16 13:09:22 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jg4lh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jg4lh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jg4lh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Mar 16 13:09:22.957: INFO: The status of Pod dns-542 is Pending, waiting for it to be Running (with Ready = true)
Mar 16 13:09:25.034: INFO: The status of Pod dns-542 is Pending, waiting for it to be Running (with Ready = true)
Mar 16 13:09:26.961: INFO: The status of Pod dns-542 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
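The relevant parts of the pod dump above are `DNSPolicy:None` plus `DNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local]}`: with policy `None`, the kubelet builds the container's resolv.conf entirely from `dnsConfig`. A simplified sketch of that rendering (not the kubelet's actual code; it also handles `options` and merging for other policies):

```python
# With dnsPolicy "None", resolv.conf comes solely from the pod's dnsConfig.
dns_config = {"nameservers": ["1.1.1.1"], "searches": ["resolv.conf.local"]}

def render_resolv_conf(cfg):
    lines = [f"nameserver {ip}" for ip in cfg["nameservers"]]
    if cfg.get("searches"):
        lines.append("search " + " ".join(cfg["searches"]))
    return "\n".join(lines) + "\n"

assert render_resolv_conf(dns_config) == "nameserver 1.1.1.1\nsearch resolv.conf.local\n"
```

The two agnhost exec calls that follow (`dns-suffix`, `dns-server-list`) are the test reading those values back out of the running pod.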
Mar 16 13:09:26.961: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-542 PodName:dns-542 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:09:26.961: INFO: >>> kubeConfig: /root/.kube/config I0316 13:09:26.999259 7 log.go:172] (0xc002e3ac60) (0xc001044460) Create stream I0316 13:09:26.999291 7 log.go:172] (0xc002e3ac60) (0xc001044460) Stream added, broadcasting: 1 I0316 13:09:27.005590 7 log.go:172] (0xc002e3ac60) Reply frame received for 1 I0316 13:09:27.005672 7 log.go:172] (0xc002e3ac60) (0xc0013a8000) Create stream I0316 13:09:27.005699 7 log.go:172] (0xc002e3ac60) (0xc0013a8000) Stream added, broadcasting: 3 I0316 13:09:27.007302 7 log.go:172] (0xc002e3ac60) Reply frame received for 3 I0316 13:09:27.007324 7 log.go:172] (0xc002e3ac60) (0xc000d70e60) Create stream I0316 13:09:27.007332 7 log.go:172] (0xc002e3ac60) (0xc000d70e60) Stream added, broadcasting: 5 I0316 13:09:27.008322 7 log.go:172] (0xc002e3ac60) Reply frame received for 5 I0316 13:09:27.105994 7 log.go:172] (0xc002e3ac60) Data frame received for 3 I0316 13:09:27.106108 7 log.go:172] (0xc0013a8000) (3) Data frame handling I0316 13:09:27.106164 7 log.go:172] (0xc0013a8000) (3) Data frame sent I0316 13:09:27.106749 7 log.go:172] (0xc002e3ac60) Data frame received for 5 I0316 13:09:27.106793 7 log.go:172] (0xc000d70e60) (5) Data frame handling I0316 13:09:27.106840 7 log.go:172] (0xc002e3ac60) Data frame received for 3 I0316 13:09:27.106857 7 log.go:172] (0xc0013a8000) (3) Data frame handling I0316 13:09:27.108641 7 log.go:172] (0xc002e3ac60) Data frame received for 1 I0316 13:09:27.108662 7 log.go:172] (0xc001044460) (1) Data frame handling I0316 13:09:27.108687 7 log.go:172] (0xc001044460) (1) Data frame sent I0316 13:09:27.108710 7 log.go:172] (0xc002e3ac60) (0xc001044460) Stream removed, broadcasting: 1 I0316 13:09:27.108747 7 log.go:172] (0xc002e3ac60) Go away received I0316 13:09:27.109091 7 log.go:172] (0xc002e3ac60) 
(0xc001044460) Stream removed, broadcasting: 1 I0316 13:09:27.109134 7 log.go:172] (0xc002e3ac60) (0xc0013a8000) Stream removed, broadcasting: 3 I0316 13:09:27.109150 7 log.go:172] (0xc002e3ac60) (0xc000d70e60) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Mar 16 13:09:27.109: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-542 PodName:dns-542 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:09:27.109: INFO: >>> kubeConfig: /root/.kube/config I0316 13:09:27.143893 7 log.go:172] (0xc002d90c60) (0xc000d71180) Create stream I0316 13:09:27.143923 7 log.go:172] (0xc002d90c60) (0xc000d71180) Stream added, broadcasting: 1 I0316 13:09:27.146580 7 log.go:172] (0xc002d90c60) Reply frame received for 1 I0316 13:09:27.146631 7 log.go:172] (0xc002d90c60) (0xc0013a80a0) Create stream I0316 13:09:27.146647 7 log.go:172] (0xc002d90c60) (0xc0013a80a0) Stream added, broadcasting: 3 I0316 13:09:27.147504 7 log.go:172] (0xc002d90c60) Reply frame received for 3 I0316 13:09:27.147534 7 log.go:172] (0xc002d90c60) (0xc0009fcfa0) Create stream I0316 13:09:27.147544 7 log.go:172] (0xc002d90c60) (0xc0009fcfa0) Stream added, broadcasting: 5 I0316 13:09:27.148390 7 log.go:172] (0xc002d90c60) Reply frame received for 5 I0316 13:09:27.223370 7 log.go:172] (0xc002d90c60) Data frame received for 3 I0316 13:09:27.223404 7 log.go:172] (0xc0013a80a0) (3) Data frame handling I0316 13:09:27.223423 7 log.go:172] (0xc0013a80a0) (3) Data frame sent I0316 13:09:27.224039 7 log.go:172] (0xc002d90c60) Data frame received for 3 I0316 13:09:27.224065 7 log.go:172] (0xc0013a80a0) (3) Data frame handling I0316 13:09:27.224179 7 log.go:172] (0xc002d90c60) Data frame received for 5 I0316 13:09:27.224208 7 log.go:172] (0xc0009fcfa0) (5) Data frame handling I0316 13:09:27.225726 7 log.go:172] (0xc002d90c60) Data frame received for 1 I0316 13:09:27.225758 7 log.go:172] (0xc000d71180) (1) Data 
frame handling I0316 13:09:27.225798 7 log.go:172] (0xc000d71180) (1) Data frame sent I0316 13:09:27.225853 7 log.go:172] (0xc002d90c60) (0xc000d71180) Stream removed, broadcasting: 1 I0316 13:09:27.225891 7 log.go:172] (0xc002d90c60) Go away received I0316 13:09:27.225955 7 log.go:172] (0xc002d90c60) (0xc000d71180) Stream removed, broadcasting: 1 I0316 13:09:27.225971 7 log.go:172] (0xc002d90c60) (0xc0013a80a0) Stream removed, broadcasting: 3 I0316 13:09:27.225981 7 log.go:172] (0xc002d90c60) (0xc0009fcfa0) Stream removed, broadcasting: 5 Mar 16 13:09:27.225: INFO: Deleting pod dns-542... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:09:27.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-542" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":12,"skipped":230,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:09:27.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to 
be ready Mar 16 13:09:28.999: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 13:09:31.011: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719960968, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719960968, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719960969, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719960968, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 13:09:34.022: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow 
webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:09:46.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2662" for this suite. STEP: Destroying namespace "webhook-2662-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.032 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":13,"skipped":241,"failed":0} [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:09:46.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 16 13:09:46.461: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current 
Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. Mar 16 13:09:47.726: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 16 13:09:49.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719960987, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719960987, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719960987, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719960987, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:09:52.340: INFO: Waited 521.524484ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:09:53.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7141" for this suite. 
• [SLOW TEST:6.965 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":14,"skipped":241,"failed":0} S ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:09:53.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-8462/secret-test-9adff1f1-3b37-499a-bcff-fc37f7d6db3f STEP: Creating a pod to test consume secrets Mar 16 13:09:53.773: INFO: Waiting up to 5m0s for pod "pod-configmaps-71c5c058-93ad-49cb-9cf5-fb70118b4521" in namespace "secrets-8462" to be "Succeeded or Failed" Mar 16 13:09:53.790: INFO: Pod "pod-configmaps-71c5c058-93ad-49cb-9cf5-fb70118b4521": Phase="Pending", Reason="", readiness=false. Elapsed: 16.926956ms Mar 16 13:09:55.837: INFO: Pod "pod-configmaps-71c5c058-93ad-49cb-9cf5-fb70118b4521": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.064535965s Mar 16 13:09:57.855: INFO: Pod "pod-configmaps-71c5c058-93ad-49cb-9cf5-fb70118b4521": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082060555s STEP: Saw pod success Mar 16 13:09:57.855: INFO: Pod "pod-configmaps-71c5c058-93ad-49cb-9cf5-fb70118b4521" satisfied condition "Succeeded or Failed" Mar 16 13:09:57.857: INFO: Trying to get logs from node latest-worker pod pod-configmaps-71c5c058-93ad-49cb-9cf5-fb70118b4521 container env-test: STEP: delete the pod Mar 16 13:09:57.874: INFO: Waiting for pod pod-configmaps-71c5c058-93ad-49cb-9cf5-fb70118b4521 to disappear Mar 16 13:09:57.879: INFO: Pod pod-configmaps-71c5c058-93ad-49cb-9cf5-fb70118b4521 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:09:57.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8462" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":242,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:09:57.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name 
s-test-opt-del-5c0d55a4-0cf4-4382-a0a7-212b408f552d STEP: Creating secret with name s-test-opt-upd-93055aad-6859-4a3c-a0c3-0c39061fa812 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-5c0d55a4-0cf4-4382-a0a7-212b408f552d STEP: Updating secret s-test-opt-upd-93055aad-6859-4a3c-a0c3-0c39061fa812 STEP: Creating secret with name s-test-opt-create-ca1437ae-aa3e-4603-983b-19054e0bf8d1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:11:14.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8362" for this suite. • [SLOW TEST:76.668 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":16,"skipped":256,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:11:14.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 16 13:11:14.620: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 16 13:11:14.640: INFO: Waiting for terminating namespaces to be deleted... Mar 16 13:11:14.642: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 16 13:11:14.646: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 16 13:11:14.647: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 13:11:14.647: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 16 13:11:14.647: INFO: Container kube-proxy ready: true, restart count 0 Mar 16 13:11:14.647: INFO: pod-projected-secrets-6bd0ea3e-d343-45e6-a47e-c43409eb131d from projected-8362 started at 2020-03-16 13:09:58 +0000 UTC (3 container statuses recorded) Mar 16 13:11:14.647: INFO: Container creates-volume-test ready: true, restart count 0 Mar 16 13:11:14.647: INFO: Container dels-volume-test ready: true, restart count 0 Mar 16 13:11:14.647: INFO: Container upds-volume-test ready: true, restart count 0 Mar 16 13:11:14.647: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 16 13:11:14.662: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 16 13:11:14.662: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 13:11:14.662: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 16 13:11:14.662: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. 
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15fccae9db135d61], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:11:15.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4925" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":17,"skipped":275,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:11:15.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-4124d307-d47a-485b-810e-90966992a49e in namespace container-probe-5894 Mar 16 13:11:19.772: INFO: Started pod 
busybox-4124d307-d47a-485b-810e-90966992a49e in namespace container-probe-5894 STEP: checking the pod's current state and verifying that restartCount is present Mar 16 13:11:19.776: INFO: Initial restart count of pod busybox-4124d307-d47a-485b-810e-90966992a49e is 0 Mar 16 13:12:09.890: INFO: Restart count of pod container-probe-5894/busybox-4124d307-d47a-485b-810e-90966992a49e is now 1 (50.113963023s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:12:09.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5894" for this suite. • [SLOW TEST:54.222 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":297,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:12:09.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root 
with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-498c4422-b0a7-4155-9247-3361d4afeba2 STEP: Creating a pod to test consume secrets Mar 16 13:12:09.995: INFO: Waiting up to 5m0s for pod "pod-secrets-7aebf3cb-5bd0-4f6a-8de8-f49cfee9e134" in namespace "secrets-1367" to be "Succeeded or Failed" Mar 16 13:12:10.013: INFO: Pod "pod-secrets-7aebf3cb-5bd0-4f6a-8de8-f49cfee9e134": Phase="Pending", Reason="", readiness=false. Elapsed: 17.903418ms Mar 16 13:12:12.017: INFO: Pod "pod-secrets-7aebf3cb-5bd0-4f6a-8de8-f49cfee9e134": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021463204s Mar 16 13:12:14.021: INFO: Pod "pod-secrets-7aebf3cb-5bd0-4f6a-8de8-f49cfee9e134": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026051529s STEP: Saw pod success Mar 16 13:12:14.021: INFO: Pod "pod-secrets-7aebf3cb-5bd0-4f6a-8de8-f49cfee9e134" satisfied condition "Succeeded or Failed" Mar 16 13:12:14.024: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-7aebf3cb-5bd0-4f6a-8de8-f49cfee9e134 container secret-volume-test: STEP: delete the pod Mar 16 13:12:14.122: INFO: Waiting for pod pod-secrets-7aebf3cb-5bd0-4f6a-8de8-f49cfee9e134 to disappear Mar 16 13:12:14.126: INFO: Pod pod-secrets-7aebf3cb-5bd0-4f6a-8de8-f49cfee9e134 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:12:14.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1367" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":19,"skipped":301,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:12:14.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:12:14.196: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 16 13:12:17.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9775 create -f -' Mar 16 13:12:19.599: INFO: stderr: "" Mar 16 13:12:19.599: INFO: stdout: "e2e-test-crd-publish-openapi-7929-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 16 13:12:19.599: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9775 delete e2e-test-crd-publish-openapi-7929-crds test-cr' Mar 16 13:12:19.694: INFO: stderr: "" Mar 16 13:12:19.694: INFO: stdout: "e2e-test-crd-publish-openapi-7929-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 16 13:12:19.695: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9775 apply -f -' Mar 16 13:12:19.919: INFO: stderr: "" Mar 16 13:12:19.919: INFO: stdout: "e2e-test-crd-publish-openapi-7929-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 16 13:12:19.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9775 delete e2e-test-crd-publish-openapi-7929-crds test-cr' Mar 16 13:12:20.016: INFO: stderr: "" Mar 16 13:12:20.016: INFO: stdout: "e2e-test-crd-publish-openapi-7929-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 16 13:12:20.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7929-crds' Mar 16 13:12:20.233: INFO: stderr: "" Mar 16 13:12:20.233: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7929-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:12:22.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9775" for this suite. 
• [SLOW TEST:7.998 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":20,"skipped":301,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:12:22.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments Mar 16 13:12:22.183: INFO: Waiting up to 5m0s for pod "client-containers-f32121c7-c85b-4e1e-b297-84a96780a089" in namespace "containers-3118" to be "Succeeded or Failed" Mar 16 13:12:22.187: INFO: Pod "client-containers-f32121c7-c85b-4e1e-b297-84a96780a089": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033505ms Mar 16 13:12:24.210: INFO: Pod "client-containers-f32121c7-c85b-4e1e-b297-84a96780a089": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.026594189s Mar 16 13:12:26.214: INFO: Pod "client-containers-f32121c7-c85b-4e1e-b297-84a96780a089": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030693496s STEP: Saw pod success Mar 16 13:12:26.214: INFO: Pod "client-containers-f32121c7-c85b-4e1e-b297-84a96780a089" satisfied condition "Succeeded or Failed" Mar 16 13:12:26.217: INFO: Trying to get logs from node latest-worker2 pod client-containers-f32121c7-c85b-4e1e-b297-84a96780a089 container test-container: STEP: delete the pod Mar 16 13:12:26.237: INFO: Waiting for pod client-containers-f32121c7-c85b-4e1e-b297-84a96780a089 to disappear Mar 16 13:12:26.241: INFO: Pod client-containers-f32121c7-c85b-4e1e-b297-84a96780a089 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:12:26.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3118" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":309,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:12:26.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: 
Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:12:31.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9726" for this suite. • [SLOW TEST:5.144 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":22,"skipped":340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:12:31.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-7670 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 16 13:12:31.491: INFO: Waiting 
up to 10m0s for all (but 0) nodes to be schedulable Mar 16 13:12:31.538: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 16 13:12:33.543: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 16 13:12:35.542: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:12:37.542: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:12:39.543: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:12:41.542: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:12:43.542: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:12:45.542: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:12:47.558: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:12:49.576: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 16 13:12:49.581: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 16 13:12:53.627: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.26:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7670 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:12:53.628: INFO: >>> kubeConfig: /root/.kube/config I0316 13:12:53.661685 7 log.go:172] (0xc0045b9760) (0xc000447ae0) Create stream I0316 13:12:53.661716 7 log.go:172] (0xc0045b9760) (0xc000447ae0) Stream added, broadcasting: 1 I0316 13:12:53.663758 7 log.go:172] (0xc0045b9760) Reply frame received for 1 I0316 13:12:53.663814 7 log.go:172] (0xc0045b9760) (0xc000c5a0a0) Create stream I0316 13:12:53.663830 7 log.go:172] (0xc0045b9760) (0xc000c5a0a0) Stream added, broadcasting: 3 I0316 13:12:53.664766 7 log.go:172] (0xc0045b9760) Reply frame received for 3 I0316 13:12:53.664817 7 log.go:172] 
(0xc0045b9760) (0xc00036abe0) Create stream I0316 13:12:53.664836 7 log.go:172] (0xc0045b9760) (0xc00036abe0) Stream added, broadcasting: 5 I0316 13:12:53.665828 7 log.go:172] (0xc0045b9760) Reply frame received for 5 I0316 13:12:53.766258 7 log.go:172] (0xc0045b9760) Data frame received for 3 I0316 13:12:53.766297 7 log.go:172] (0xc000c5a0a0) (3) Data frame handling I0316 13:12:53.766325 7 log.go:172] (0xc000c5a0a0) (3) Data frame sent I0316 13:12:53.766343 7 log.go:172] (0xc0045b9760) Data frame received for 3 I0316 13:12:53.766359 7 log.go:172] (0xc000c5a0a0) (3) Data frame handling I0316 13:12:53.766527 7 log.go:172] (0xc0045b9760) Data frame received for 5 I0316 13:12:53.766623 7 log.go:172] (0xc00036abe0) (5) Data frame handling I0316 13:12:53.768562 7 log.go:172] (0xc0045b9760) Data frame received for 1 I0316 13:12:53.768584 7 log.go:172] (0xc000447ae0) (1) Data frame handling I0316 13:12:53.768598 7 log.go:172] (0xc000447ae0) (1) Data frame sent I0316 13:12:53.768616 7 log.go:172] (0xc0045b9760) (0xc000447ae0) Stream removed, broadcasting: 1 I0316 13:12:53.768720 7 log.go:172] (0xc0045b9760) (0xc000447ae0) Stream removed, broadcasting: 1 I0316 13:12:53.768744 7 log.go:172] (0xc0045b9760) Go away received I0316 13:12:53.768812 7 log.go:172] (0xc0045b9760) (0xc000c5a0a0) Stream removed, broadcasting: 3 I0316 13:12:53.768857 7 log.go:172] (0xc0045b9760) (0xc00036abe0) Stream removed, broadcasting: 5 Mar 16 13:12:53.768: INFO: Found all expected endpoints: [netserver-0] Mar 16 13:12:53.772: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.222:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7670 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:12:53.772: INFO: >>> kubeConfig: /root/.kube/config I0316 13:12:53.810086 7 log.go:172] (0xc004bc46e0) (0xc00036b4a0) Create stream I0316 13:12:53.810121 7 log.go:172] 
(0xc004bc46e0) (0xc00036b4a0) Stream added, broadcasting: 1 I0316 13:12:53.817443 7 log.go:172] (0xc004bc46e0) Reply frame received for 1 I0316 13:12:53.817483 7 log.go:172] (0xc004bc46e0) (0xc000d30000) Create stream I0316 13:12:53.817493 7 log.go:172] (0xc004bc46e0) (0xc000d30000) Stream added, broadcasting: 3 I0316 13:12:53.818320 7 log.go:172] (0xc004bc46e0) Reply frame received for 3 I0316 13:12:53.818349 7 log.go:172] (0xc004bc46e0) (0xc000c5a000) Create stream I0316 13:12:53.818360 7 log.go:172] (0xc004bc46e0) (0xc000c5a000) Stream added, broadcasting: 5 I0316 13:12:53.819132 7 log.go:172] (0xc004bc46e0) Reply frame received for 5 I0316 13:12:53.886263 7 log.go:172] (0xc004bc46e0) Data frame received for 3 I0316 13:12:53.886292 7 log.go:172] (0xc000d30000) (3) Data frame handling I0316 13:12:53.886316 7 log.go:172] (0xc000d30000) (3) Data frame sent I0316 13:12:53.886419 7 log.go:172] (0xc004bc46e0) Data frame received for 5 I0316 13:12:53.886442 7 log.go:172] (0xc000c5a000) (5) Data frame handling I0316 13:12:53.886720 7 log.go:172] (0xc004bc46e0) Data frame received for 3 I0316 13:12:53.886744 7 log.go:172] (0xc000d30000) (3) Data frame handling I0316 13:12:53.888460 7 log.go:172] (0xc004bc46e0) Data frame received for 1 I0316 13:12:53.888489 7 log.go:172] (0xc00036b4a0) (1) Data frame handling I0316 13:12:53.888509 7 log.go:172] (0xc00036b4a0) (1) Data frame sent I0316 13:12:53.888535 7 log.go:172] (0xc004bc46e0) (0xc00036b4a0) Stream removed, broadcasting: 1 I0316 13:12:53.888558 7 log.go:172] (0xc004bc46e0) Go away received I0316 13:12:53.888808 7 log.go:172] (0xc004bc46e0) (0xc00036b4a0) Stream removed, broadcasting: 1 I0316 13:12:53.888842 7 log.go:172] (0xc004bc46e0) (0xc000d30000) Stream removed, broadcasting: 3 I0316 13:12:53.888858 7 log.go:172] (0xc004bc46e0) (0xc000c5a000) Stream removed, broadcasting: 5 Mar 16 13:12:53.888: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:12:53.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7670" for this suite. • [SLOW TEST:22.484 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":363,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:12:53.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:12:53.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-6049" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":24,"skipped":373,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:12:53.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 16 13:12:54.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7916' Mar 16 13:12:54.117: INFO: stderr: "" 
Mar 16 13:12:54.118: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 16 13:12:59.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7916 -o json' Mar 16 13:12:59.407: INFO: stderr: "" Mar 16 13:12:59.407: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-16T13:12:54Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7916\",\n \"resourceVersion\": \"267976\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7916/pods/e2e-test-httpd-pod\",\n \"uid\": \"920030d8-b412-42b9-8344-a806f656d9ad\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-dkzk5\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"latest-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": 
\"default-token-dkzk5\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-dkzk5\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-16T13:12:54Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-16T13:12:56Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-16T13:12:56Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-16T13:12:54Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://20a7e85aada1049b6c83f3131c5156d4f0fc53003bb7977b2f7eb1c07f2f1984\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-16T13:12:56Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.28\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.28\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-16T13:12:54Z\"\n }\n}\n" STEP: replace the image in the pod Mar 16 13:12:59.408: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7916' Mar 16 13:12:59.711: INFO: stderr: "" Mar 16 13:12:59.711: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 Mar 16 
13:12:59.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7916' Mar 16 13:13:12.954: INFO: stderr: "" Mar 16 13:13:12.954: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:13:12.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7916" for this suite. • [SLOW TEST:19.061 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":25,"skipped":378,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:13:13.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7113 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-7113 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7113 Mar 16 13:13:14.045: INFO: Found 0 stateful pods, waiting for 1 Mar 16 13:13:24.049: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 16 13:13:24.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7113 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 13:13:24.291: INFO: stderr: "I0316 13:13:24.179320 251 log.go:172] (0xc00097a160) (0xc000681360) Create stream\nI0316 13:13:24.179370 251 log.go:172] (0xc00097a160) (0xc000681360) Stream added, broadcasting: 1\nI0316 13:13:24.181747 251 log.go:172] (0xc00097a160) Reply frame received for 1\nI0316 13:13:24.181800 251 log.go:172] (0xc00097a160) (0xc0008bc000) Create stream\nI0316 13:13:24.181820 251 log.go:172] (0xc00097a160) (0xc0008bc000) Stream added, broadcasting: 3\nI0316 13:13:24.182991 251 log.go:172] (0xc00097a160) Reply frame received for 3\nI0316 13:13:24.183029 251 log.go:172] (0xc00097a160) (0xc0008bc0a0) Create stream\nI0316 13:13:24.183044 251 log.go:172] (0xc00097a160) (0xc0008bc0a0) Stream added, broadcasting: 5\nI0316 13:13:24.184531 251 log.go:172] (0xc00097a160) Reply frame received for 5\nI0316 
13:13:24.251317 251 log.go:172] (0xc00097a160) Data frame received for 5\nI0316 13:13:24.251353 251 log.go:172] (0xc0008bc0a0) (5) Data frame handling\nI0316 13:13:24.251380 251 log.go:172] (0xc0008bc0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 13:13:24.283401 251 log.go:172] (0xc00097a160) Data frame received for 3\nI0316 13:13:24.283447 251 log.go:172] (0xc0008bc000) (3) Data frame handling\nI0316 13:13:24.283495 251 log.go:172] (0xc0008bc000) (3) Data frame sent\nI0316 13:13:24.283751 251 log.go:172] (0xc00097a160) Data frame received for 3\nI0316 13:13:24.283786 251 log.go:172] (0xc0008bc000) (3) Data frame handling\nI0316 13:13:24.284106 251 log.go:172] (0xc00097a160) Data frame received for 5\nI0316 13:13:24.284209 251 log.go:172] (0xc0008bc0a0) (5) Data frame handling\nI0316 13:13:24.286068 251 log.go:172] (0xc00097a160) Data frame received for 1\nI0316 13:13:24.286088 251 log.go:172] (0xc000681360) (1) Data frame handling\nI0316 13:13:24.286102 251 log.go:172] (0xc000681360) (1) Data frame sent\nI0316 13:13:24.286124 251 log.go:172] (0xc00097a160) (0xc000681360) Stream removed, broadcasting: 1\nI0316 13:13:24.286164 251 log.go:172] (0xc00097a160) Go away received\nI0316 13:13:24.286514 251 log.go:172] (0xc00097a160) (0xc000681360) Stream removed, broadcasting: 1\nI0316 13:13:24.286535 251 log.go:172] (0xc00097a160) (0xc0008bc000) Stream removed, broadcasting: 3\nI0316 13:13:24.286544 251 log.go:172] (0xc00097a160) (0xc0008bc0a0) Stream removed, broadcasting: 5\n" Mar 16 13:13:24.291: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 13:13:24.291: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 13:13:24.295: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 16 13:13:34.300: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently 
Running - Ready=false Mar 16 13:13:34.300: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 13:13:34.355: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 13:13:34.355: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:24 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:14 +0000 UTC }] Mar 16 13:13:34.355: INFO: Mar 16 13:13:34.355: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 16 13:13:35.359: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.953009552s Mar 16 13:13:36.363: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.94890357s Mar 16 13:13:37.541: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.944404739s Mar 16 13:13:38.545: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.766393494s Mar 16 13:13:39.552: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.760664839s Mar 16 13:13:40.558: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.756138582s Mar 16 13:13:41.563: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.749784152s Mar 16 13:13:42.567: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.744600995s Mar 16 13:13:43.572: INFO: Verifying statefulset ss doesn't scale past 3 for another 740.505538ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7113 Mar 16 13:13:44.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7113 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ 
|| true' Mar 16 13:13:44.809: INFO: stderr: "I0316 13:13:44.710974 272 log.go:172] (0xc0007eab00) (0xc0007e6320) Create stream\nI0316 13:13:44.711042 272 log.go:172] (0xc0007eab00) (0xc0007e6320) Stream added, broadcasting: 1\nI0316 13:13:44.716648 272 log.go:172] (0xc0007eab00) Reply frame received for 1\nI0316 13:13:44.716700 272 log.go:172] (0xc0007eab00) (0xc00065f400) Create stream\nI0316 13:13:44.716714 272 log.go:172] (0xc0007eab00) (0xc00065f400) Stream added, broadcasting: 3\nI0316 13:13:44.718140 272 log.go:172] (0xc0007eab00) Reply frame received for 3\nI0316 13:13:44.718208 272 log.go:172] (0xc0007eab00) (0xc0007e63c0) Create stream\nI0316 13:13:44.718245 272 log.go:172] (0xc0007eab00) (0xc0007e63c0) Stream added, broadcasting: 5\nI0316 13:13:44.719071 272 log.go:172] (0xc0007eab00) Reply frame received for 5\nI0316 13:13:44.803129 272 log.go:172] (0xc0007eab00) Data frame received for 5\nI0316 13:13:44.803193 272 log.go:172] (0xc0007e63c0) (5) Data frame handling\nI0316 13:13:44.803220 272 log.go:172] (0xc0007e63c0) (5) Data frame sent\nI0316 13:13:44.803240 272 log.go:172] (0xc0007eab00) Data frame received for 5\nI0316 13:13:44.803260 272 log.go:172] (0xc0007e63c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0316 13:13:44.803298 272 log.go:172] (0xc0007eab00) Data frame received for 3\nI0316 13:13:44.803331 272 log.go:172] (0xc00065f400) (3) Data frame handling\nI0316 13:13:44.803420 272 log.go:172] (0xc00065f400) (3) Data frame sent\nI0316 13:13:44.803432 272 log.go:172] (0xc0007eab00) Data frame received for 3\nI0316 13:13:44.803440 272 log.go:172] (0xc00065f400) (3) Data frame handling\nI0316 13:13:44.804798 272 log.go:172] (0xc0007eab00) Data frame received for 1\nI0316 13:13:44.804820 272 log.go:172] (0xc0007e6320) (1) Data frame handling\nI0316 13:13:44.804855 272 log.go:172] (0xc0007e6320) (1) Data frame sent\nI0316 13:13:44.804905 272 log.go:172] (0xc0007eab00) (0xc0007e6320) Stream removed, broadcasting: 
1\nI0316 13:13:44.804954 272 log.go:172] (0xc0007eab00) Go away received\nI0316 13:13:44.805643 272 log.go:172] (0xc0007eab00) (0xc0007e6320) Stream removed, broadcasting: 1\nI0316 13:13:44.805667 272 log.go:172] (0xc0007eab00) (0xc00065f400) Stream removed, broadcasting: 3\nI0316 13:13:44.805678 272 log.go:172] (0xc0007eab00) (0xc0007e63c0) Stream removed, broadcasting: 5\n" Mar 16 13:13:44.810: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 13:13:44.810: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 16 13:13:44.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7113 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 16 13:13:45.013: INFO: stderr: "I0316 13:13:44.942150 294 log.go:172] (0xc00095aa50) (0xc0009040a0) Create stream\nI0316 13:13:44.942199 294 log.go:172] (0xc00095aa50) (0xc0009040a0) Stream added, broadcasting: 1\nI0316 13:13:44.948927 294 log.go:172] (0xc00095aa50) Reply frame received for 1\nI0316 13:13:44.948990 294 log.go:172] (0xc00095aa50) (0xc0006d14a0) Create stream\nI0316 13:13:44.949007 294 log.go:172] (0xc00095aa50) (0xc0006d14a0) Stream added, broadcasting: 3\nI0316 13:13:44.950211 294 log.go:172] (0xc00095aa50) Reply frame received for 3\nI0316 13:13:44.950263 294 log.go:172] (0xc00095aa50) (0xc0006d1680) Create stream\nI0316 13:13:44.950279 294 log.go:172] (0xc00095aa50) (0xc0006d1680) Stream added, broadcasting: 5\nI0316 13:13:44.951205 294 log.go:172] (0xc00095aa50) Reply frame received for 5\nI0316 13:13:45.008145 294 log.go:172] (0xc00095aa50) Data frame received for 5\nI0316 13:13:45.008169 294 log.go:172] (0xc0006d1680) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0316 
13:13:45.008191 294 log.go:172] (0xc00095aa50) Data frame received for 3\nI0316 13:13:45.008229 294 log.go:172] (0xc0006d14a0) (3) Data frame handling\nI0316 13:13:45.008251 294 log.go:172] (0xc0006d14a0) (3) Data frame sent\nI0316 13:13:45.008275 294 log.go:172] (0xc0006d1680) (5) Data frame sent\nI0316 13:13:45.008292 294 log.go:172] (0xc00095aa50) Data frame received for 5\nI0316 13:13:45.008309 294 log.go:172] (0xc0006d1680) (5) Data frame handling\nI0316 13:13:45.008416 294 log.go:172] (0xc00095aa50) Data frame received for 3\nI0316 13:13:45.008433 294 log.go:172] (0xc0006d14a0) (3) Data frame handling\nI0316 13:13:45.010171 294 log.go:172] (0xc00095aa50) Data frame received for 1\nI0316 13:13:45.010195 294 log.go:172] (0xc0009040a0) (1) Data frame handling\nI0316 13:13:45.010211 294 log.go:172] (0xc0009040a0) (1) Data frame sent\nI0316 13:13:45.010226 294 log.go:172] (0xc00095aa50) (0xc0009040a0) Stream removed, broadcasting: 1\nI0316 13:13:45.010241 294 log.go:172] (0xc00095aa50) Go away received\nI0316 13:13:45.010594 294 log.go:172] (0xc00095aa50) (0xc0009040a0) Stream removed, broadcasting: 1\nI0316 13:13:45.010612 294 log.go:172] (0xc00095aa50) (0xc0006d14a0) Stream removed, broadcasting: 3\nI0316 13:13:45.010621 294 log.go:172] (0xc00095aa50) (0xc0006d1680) Stream removed, broadcasting: 5\n" Mar 16 13:13:45.014: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 13:13:45.014: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 16 13:13:45.014: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7113 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 16 13:13:45.221: INFO: stderr: "I0316 13:13:45.135778 317 log.go:172] (0xc000afadc0) (0xc000a125a0) Create stream\nI0316 13:13:45.135843 317 log.go:172] (0xc000afadc0) 
(0xc000a125a0) Stream added, broadcasting: 1\nI0316 13:13:45.141056 317 log.go:172] (0xc000afadc0) Reply frame received for 1\nI0316 13:13:45.141098 317 log.go:172] (0xc000afadc0) (0xc0005e3540) Create stream\nI0316 13:13:45.141229 317 log.go:172] (0xc000afadc0) (0xc0005e3540) Stream added, broadcasting: 3\nI0316 13:13:45.142331 317 log.go:172] (0xc000afadc0) Reply frame received for 3\nI0316 13:13:45.142355 317 log.go:172] (0xc000afadc0) (0xc00042a960) Create stream\nI0316 13:13:45.142363 317 log.go:172] (0xc000afadc0) (0xc00042a960) Stream added, broadcasting: 5\nI0316 13:13:45.143058 317 log.go:172] (0xc000afadc0) Reply frame received for 5\nI0316 13:13:45.217029 317 log.go:172] (0xc000afadc0) Data frame received for 3\nI0316 13:13:45.217072 317 log.go:172] (0xc0005e3540) (3) Data frame handling\nI0316 13:13:45.217086 317 log.go:172] (0xc0005e3540) (3) Data frame sent\nI0316 13:13:45.217094 317 log.go:172] (0xc000afadc0) Data frame received for 3\nI0316 13:13:45.217101 317 log.go:172] (0xc0005e3540) (3) Data frame handling\nI0316 13:13:45.217217 317 log.go:172] (0xc000afadc0) Data frame received for 5\nI0316 13:13:45.217231 317 log.go:172] (0xc00042a960) (5) Data frame handling\nI0316 13:13:45.217245 317 log.go:172] (0xc00042a960) (5) Data frame sent\nI0316 13:13:45.217254 317 log.go:172] (0xc000afadc0) Data frame received for 5\nI0316 13:13:45.217261 317 log.go:172] (0xc00042a960) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0316 13:13:45.218422 317 log.go:172] (0xc000afadc0) Data frame received for 1\nI0316 13:13:45.218435 317 log.go:172] (0xc000a125a0) (1) Data frame handling\nI0316 13:13:45.218447 317 log.go:172] (0xc000a125a0) (1) Data frame sent\nI0316 13:13:45.218751 317 log.go:172] (0xc000afadc0) (0xc000a125a0) Stream removed, broadcasting: 1\nI0316 13:13:45.218794 317 log.go:172] (0xc000afadc0) Go away received\nI0316 13:13:45.219087 317 log.go:172] 
(0xc000afadc0) (0xc000a125a0) Stream removed, broadcasting: 1\nI0316 13:13:45.219101 317 log.go:172] (0xc000afadc0) (0xc0005e3540) Stream removed, broadcasting: 3\nI0316 13:13:45.219111 317 log.go:172] (0xc000afadc0) (0xc00042a960) Stream removed, broadcasting: 5\n" Mar 16 13:13:45.221: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 13:13:45.221: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 16 13:13:45.225: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:13:45.225: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:13:45.225: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 16 13:13:45.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7113 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 13:13:45.405: INFO: stderr: "I0316 13:13:45.350019 337 log.go:172] (0xc00003ae70) (0xc000568a00) Create stream\nI0316 13:13:45.350084 337 log.go:172] (0xc00003ae70) (0xc000568a00) Stream added, broadcasting: 1\nI0316 13:13:45.352656 337 log.go:172] (0xc00003ae70) Reply frame received for 1\nI0316 13:13:45.352691 337 log.go:172] (0xc00003ae70) (0xc0009b2000) Create stream\nI0316 13:13:45.352701 337 log.go:172] (0xc00003ae70) (0xc0009b2000) Stream added, broadcasting: 3\nI0316 13:13:45.353556 337 log.go:172] (0xc00003ae70) Reply frame received for 3\nI0316 13:13:45.353596 337 log.go:172] (0xc00003ae70) (0xc0005d8000) Create stream\nI0316 13:13:45.353613 337 log.go:172] (0xc00003ae70) (0xc0005d8000) Stream added, broadcasting: 5\nI0316 13:13:45.354467 337 log.go:172] (0xc00003ae70) Reply frame received for 5\nI0316 
13:13:45.400393 337 log.go:172] (0xc00003ae70) Data frame received for 3\nI0316 13:13:45.400433 337 log.go:172] (0xc0009b2000) (3) Data frame handling\nI0316 13:13:45.400465 337 log.go:172] (0xc0009b2000) (3) Data frame sent\nI0316 13:13:45.400482 337 log.go:172] (0xc00003ae70) Data frame received for 3\nI0316 13:13:45.400493 337 log.go:172] (0xc0009b2000) (3) Data frame handling\nI0316 13:13:45.400631 337 log.go:172] (0xc00003ae70) Data frame received for 5\nI0316 13:13:45.400663 337 log.go:172] (0xc0005d8000) (5) Data frame handling\nI0316 13:13:45.400684 337 log.go:172] (0xc0005d8000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 13:13:45.400697 337 log.go:172] (0xc00003ae70) Data frame received for 5\nI0316 13:13:45.400727 337 log.go:172] (0xc0005d8000) (5) Data frame handling\nI0316 13:13:45.402540 337 log.go:172] (0xc00003ae70) Data frame received for 1\nI0316 13:13:45.402570 337 log.go:172] (0xc000568a00) (1) Data frame handling\nI0316 13:13:45.402587 337 log.go:172] (0xc000568a00) (1) Data frame sent\nI0316 13:13:45.402606 337 log.go:172] (0xc00003ae70) (0xc000568a00) Stream removed, broadcasting: 1\nI0316 13:13:45.402636 337 log.go:172] (0xc00003ae70) Go away received\nI0316 13:13:45.402991 337 log.go:172] (0xc00003ae70) (0xc000568a00) Stream removed, broadcasting: 1\nI0316 13:13:45.403016 337 log.go:172] (0xc00003ae70) (0xc0009b2000) Stream removed, broadcasting: 3\nI0316 13:13:45.403030 337 log.go:172] (0xc00003ae70) (0xc0005d8000) Stream removed, broadcasting: 5\n" Mar 16 13:13:45.406: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 13:13:45.406: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 13:13:45.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7113 ss-1 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 13:13:45.639: INFO: stderr: "I0316 13:13:45.535320 357 log.go:172] (0xc0008fc000) (0xc000801360) Create stream\nI0316 13:13:45.535374 357 log.go:172] (0xc0008fc000) (0xc000801360) Stream added, broadcasting: 1\nI0316 13:13:45.538629 357 log.go:172] (0xc0008fc000) Reply frame received for 1\nI0316 13:13:45.538678 357 log.go:172] (0xc0008fc000) (0xc000940000) Create stream\nI0316 13:13:45.538693 357 log.go:172] (0xc0008fc000) (0xc000940000) Stream added, broadcasting: 3\nI0316 13:13:45.539697 357 log.go:172] (0xc0008fc000) Reply frame received for 3\nI0316 13:13:45.539748 357 log.go:172] (0xc0008fc000) (0xc0009400a0) Create stream\nI0316 13:13:45.539764 357 log.go:172] (0xc0008fc000) (0xc0009400a0) Stream added, broadcasting: 5\nI0316 13:13:45.540640 357 log.go:172] (0xc0008fc000) Reply frame received for 5\nI0316 13:13:45.607468 357 log.go:172] (0xc0008fc000) Data frame received for 5\nI0316 13:13:45.607500 357 log.go:172] (0xc0009400a0) (5) Data frame handling\nI0316 13:13:45.607520 357 log.go:172] (0xc0009400a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 13:13:45.632124 357 log.go:172] (0xc0008fc000) Data frame received for 3\nI0316 13:13:45.632154 357 log.go:172] (0xc000940000) (3) Data frame handling\nI0316 13:13:45.632169 357 log.go:172] (0xc000940000) (3) Data frame sent\nI0316 13:13:45.632180 357 log.go:172] (0xc0008fc000) Data frame received for 3\nI0316 13:13:45.632190 357 log.go:172] (0xc000940000) (3) Data frame handling\nI0316 13:13:45.632336 357 log.go:172] (0xc0008fc000) Data frame received for 5\nI0316 13:13:45.632370 357 log.go:172] (0xc0009400a0) (5) Data frame handling\nI0316 13:13:45.634433 357 log.go:172] (0xc0008fc000) Data frame received for 1\nI0316 13:13:45.634452 357 log.go:172] (0xc000801360) (1) Data frame handling\nI0316 13:13:45.634462 357 log.go:172] (0xc000801360) (1) Data frame sent\nI0316 13:13:45.634472 357 log.go:172] (0xc0008fc000) 
(0xc000801360) Stream removed, broadcasting: 1\nI0316 13:13:45.634718 357 log.go:172] (0xc0008fc000) (0xc000801360) Stream removed, broadcasting: 1\nI0316 13:13:45.634732 357 log.go:172] (0xc0008fc000) (0xc000940000) Stream removed, broadcasting: 3\nI0316 13:13:45.634737 357 log.go:172] (0xc0008fc000) (0xc0009400a0) Stream removed, broadcasting: 5\n" Mar 16 13:13:45.639: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 13:13:45.639: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 13:13:45.639: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7113 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 13:13:45.909: INFO: stderr: "I0316 13:13:45.775746 380 log.go:172] (0xc0008c8a50) (0xc0005f5680) Create stream\nI0316 13:13:45.775812 380 log.go:172] (0xc0008c8a50) (0xc0005f5680) Stream added, broadcasting: 1\nI0316 13:13:45.779034 380 log.go:172] (0xc0008c8a50) Reply frame received for 1\nI0316 13:13:45.779079 380 log.go:172] (0xc0008c8a50) (0xc000836000) Create stream\nI0316 13:13:45.779094 380 log.go:172] (0xc0008c8a50) (0xc000836000) Stream added, broadcasting: 3\nI0316 13:13:45.780139 380 log.go:172] (0xc0008c8a50) Reply frame received for 3\nI0316 13:13:45.780177 380 log.go:172] (0xc0008c8a50) (0xc0005f5720) Create stream\nI0316 13:13:45.780190 380 log.go:172] (0xc0008c8a50) (0xc0005f5720) Stream added, broadcasting: 5\nI0316 13:13:45.781098 380 log.go:172] (0xc0008c8a50) Reply frame received for 5\nI0316 13:13:45.844581 380 log.go:172] (0xc0008c8a50) Data frame received for 5\nI0316 13:13:45.844602 380 log.go:172] (0xc0005f5720) (5) Data frame handling\nI0316 13:13:45.844614 380 log.go:172] (0xc0005f5720) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 13:13:45.902167 380 
log.go:172] (0xc0008c8a50) Data frame received for 5\nI0316 13:13:45.902297 380 log.go:172] (0xc0008c8a50) Data frame received for 3\nI0316 13:13:45.902330 380 log.go:172] (0xc000836000) (3) Data frame handling\nI0316 13:13:45.902350 380 log.go:172] (0xc000836000) (3) Data frame sent\nI0316 13:13:45.902378 380 log.go:172] (0xc0005f5720) (5) Data frame handling\nI0316 13:13:45.902627 380 log.go:172] (0xc0008c8a50) Data frame received for 3\nI0316 13:13:45.902648 380 log.go:172] (0xc000836000) (3) Data frame handling\nI0316 13:13:45.904016 380 log.go:172] (0xc0008c8a50) Data frame received for 1\nI0316 13:13:45.904042 380 log.go:172] (0xc0005f5680) (1) Data frame handling\nI0316 13:13:45.904063 380 log.go:172] (0xc0005f5680) (1) Data frame sent\nI0316 13:13:45.904220 380 log.go:172] (0xc0008c8a50) (0xc0005f5680) Stream removed, broadcasting: 1\nI0316 13:13:45.904485 380 log.go:172] (0xc0008c8a50) Go away received\nI0316 13:13:45.904783 380 log.go:172] (0xc0008c8a50) (0xc0005f5680) Stream removed, broadcasting: 1\nI0316 13:13:45.904817 380 log.go:172] (0xc0008c8a50) (0xc000836000) Stream removed, broadcasting: 3\nI0316 13:13:45.904840 380 log.go:172] (0xc0008c8a50) (0xc0005f5720) Stream removed, broadcasting: 5\n" Mar 16 13:13:45.909: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 13:13:45.909: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 13:13:45.909: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 13:13:45.918: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 16 13:13:55.925: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 16 13:13:55.926: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 16 13:13:55.926: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - 
Ready=false Mar 16 13:13:55.939: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 13:13:55.939: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:14 +0000 UTC }] Mar 16 13:13:55.939: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:34 +0000 UTC }] Mar 16 13:13:55.939: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:34 +0000 UTC }] Mar 16 13:13:55.939: INFO: Mar 16 13:13:55.939: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 16 13:13:56.967: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 13:13:56.967: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:45 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:14 +0000 UTC }] Mar 16 13:13:56.967: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:34 +0000 UTC }] Mar 16 13:13:56.967: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:34 +0000 UTC }] Mar 16 13:13:56.967: INFO: Mar 16 13:13:56.967: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 16 13:13:57.971: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 13:13:57.971: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:14 +0000 UTC }] Mar 16 13:13:57.971: INFO: ss-1 latest-worker Pending 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:34 +0000 UTC }] Mar 16 13:13:57.971: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:34 +0000 UTC }] Mar 16 13:13:57.971: INFO: Mar 16 13:13:57.971: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 16 13:13:58.975: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 13:13:58.975: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:14 +0000 UTC }] Mar 16 13:13:58.975: INFO: Mar 16 13:13:58.975: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 16 13:13:59.980: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 13:13:59.980: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:45 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:14 +0000 UTC }] Mar 16 13:13:59.980: INFO: Mar 16 13:13:59.980: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 16 13:14:00.984: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 13:14:00.984: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:14 +0000 UTC }] Mar 16 13:14:00.984: INFO: Mar 16 13:14:00.984: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 16 13:14:02.002: INFO: POD NODE PHASE GRACE CONDITIONS Mar 16 13:14:02.002: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-16 13:13:14 +0000 UTC }] Mar 16 13:14:02.002: INFO: Mar 16 13:14:02.002: INFO: StatefulSet ss has not reached scale 0, at 1 Mar 16 13:14:03.006: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.929869184s Mar 16 13:14:04.009: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.926412746s Mar 16 13:14:05.013: INFO: Verifying statefulset ss doesn't scale past 0 for another 922.906845ms STEP: Scaling 
down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7113 Mar 16 13:14:06.017: INFO: Scaling statefulset ss to 0 Mar 16 13:14:06.026: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 16 13:14:06.029: INFO: Deleting all statefulset in ns statefulset-7113 Mar 16 13:14:06.031: INFO: Scaling statefulset ss to 0 Mar 16 13:14:06.040: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 13:14:06.043: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:14:06.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7113" for this suite. • [SLOW TEST:53.054 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":26,"skipped":410,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir
volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:14:06.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 16 13:14:06.137: INFO: Waiting up to 5m0s for pod "pod-da41ed97-fb08-44b9-8c5d-6c061d3bfa43" in namespace "emptydir-4311" to be "Succeeded or Failed" Mar 16 13:14:06.154: INFO: Pod "pod-da41ed97-fb08-44b9-8c5d-6c061d3bfa43": Phase="Pending", Reason="", readiness=false. Elapsed: 17.068057ms Mar 16 13:14:08.158: INFO: Pod "pod-da41ed97-fb08-44b9-8c5d-6c061d3bfa43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021157371s Mar 16 13:14:10.170: INFO: Pod "pod-da41ed97-fb08-44b9-8c5d-6c061d3bfa43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032768364s STEP: Saw pod success Mar 16 13:14:10.170: INFO: Pod "pod-da41ed97-fb08-44b9-8c5d-6c061d3bfa43" satisfied condition "Succeeded or Failed" Mar 16 13:14:10.173: INFO: Trying to get logs from node latest-worker pod pod-da41ed97-fb08-44b9-8c5d-6c061d3bfa43 container test-container: STEP: delete the pod Mar 16 13:14:10.207: INFO: Waiting for pod pod-da41ed97-fb08-44b9-8c5d-6c061d3bfa43 to disappear Mar 16 13:14:10.210: INFO: Pod pod-da41ed97-fb08-44b9-8c5d-6c061d3bfa43 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:14:10.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4311" for this suite. 
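[Editor's note] The EmptyDir test above exercises a tmpfs-backed emptyDir volume and checks a 0644 file inside it. The pod shape it creates can be sketched as a minimal manifest; the pod name, image, and command here are illustrative assumptions, not taken from this log:

```yaml
# Hypothetical sketch of a pod using a memory-backed (tmpfs) emptyDir,
# writing a 0644 file and showing its permissions and the mount type.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox               # assumed image
    command:
    - sh
    - -c
    - "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # Memory medium makes the emptyDir a tmpfs mount
```

With `medium: Memory` the kubelet mounts the volume as tmpfs, which is what the `(root,0644,tmpfs)` variant of the conformance test verifies.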
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":440,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:14:10.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:14:10.292: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-9dbb10da-b9c1-4b5d-b578-053dc20e1c61" in namespace "security-context-test-5694" to be "Succeeded or Failed" Mar 16 13:14:10.315: INFO: Pod "busybox-readonly-false-9dbb10da-b9c1-4b5d-b578-053dc20e1c61": Phase="Pending", Reason="", readiness=false. Elapsed: 23.578758ms Mar 16 13:14:12.319: INFO: Pod "busybox-readonly-false-9dbb10da-b9c1-4b5d-b578-053dc20e1c61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026956537s Mar 16 13:14:14.323: INFO: Pod "busybox-readonly-false-9dbb10da-b9c1-4b5d-b578-053dc20e1c61": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030940998s Mar 16 13:14:14.323: INFO: Pod "busybox-readonly-false-9dbb10da-b9c1-4b5d-b578-053dc20e1c61" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:14:14.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5694" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":449,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:14:14.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-2306 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-2306 STEP: Deleting pre-stop pod Mar 16 13:14:27.461: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:14:27.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2306" for this suite. • [SLOW TEST:13.191 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":29,"skipped":478,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:14:27.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 13:14:28.695: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 13:14:30.823: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719961268, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719961268, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719961268, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719961268, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 13:14:33.834: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration 
objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:14:34.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4996" for this suite. STEP: Destroying namespace "webhook-4996-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.566 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":30,"skipped":532,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:14:34.091: INFO: >>> 
kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 16 13:14:34.188: INFO: Waiting up to 5m0s for pod "downwardapi-volume-74a7c7a6-33a4-4f26-9769-85236cdd49d6" in namespace "projected-6348" to be "Succeeded or Failed"
Mar 16 13:14:34.191: INFO: Pod "downwardapi-volume-74a7c7a6-33a4-4f26-9769-85236cdd49d6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.248ms
Mar 16 13:14:36.195: INFO: Pod "downwardapi-volume-74a7c7a6-33a4-4f26-9769-85236cdd49d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007015119s
Mar 16 13:14:38.199: INFO: Pod "downwardapi-volume-74a7c7a6-33a4-4f26-9769-85236cdd49d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011653905s
STEP: Saw pod success
Mar 16 13:14:38.199: INFO: Pod "downwardapi-volume-74a7c7a6-33a4-4f26-9769-85236cdd49d6" satisfied condition "Succeeded or Failed"
Mar 16 13:14:38.203: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-74a7c7a6-33a4-4f26-9769-85236cdd49d6 container client-container:
STEP: delete the pod
Mar 16 13:14:38.235: INFO: Waiting for pod downwardapi-volume-74a7c7a6-33a4-4f26-9769-85236cdd49d6 to disappear
Mar 16 13:14:38.251: INFO: Pod downwardapi-volume-74a7c7a6-33a4-4f26-9769-85236cdd49d6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:14:38.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6348" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":533,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:14:38.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 16 13:14:38.314: INFO: Waiting up to 5m0s for pod "downwardapi-volume-444a1f28-62f9-438b-ad4f-a2e5a5bd7bc6" in namespace "downward-api-2541" to be "Succeeded or Failed"
Mar 16 13:14:38.329: INFO: Pod "downwardapi-volume-444a1f28-62f9-438b-ad4f-a2e5a5bd7bc6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.343604ms
Mar 16 13:14:40.332: INFO: Pod "downwardapi-volume-444a1f28-62f9-438b-ad4f-a2e5a5bd7bc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017770518s
Mar 16 13:14:42.340: INFO: Pod "downwardapi-volume-444a1f28-62f9-438b-ad4f-a2e5a5bd7bc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025094596s
STEP: Saw pod success
Mar 16 13:14:42.340: INFO: Pod "downwardapi-volume-444a1f28-62f9-438b-ad4f-a2e5a5bd7bc6" satisfied condition "Succeeded or Failed"
Mar 16 13:14:42.342: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-444a1f28-62f9-438b-ad4f-a2e5a5bd7bc6 container client-container:
STEP: delete the pod
Mar 16 13:14:42.491: INFO: Waiting for pod downwardapi-volume-444a1f28-62f9-438b-ad4f-a2e5a5bd7bc6 to disappear
Mar 16 13:14:42.691: INFO: Pod downwardapi-volume-444a1f28-62f9-438b-ad4f-a2e5a5bd7bc6 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:14:42.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2541" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":547,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:14:42.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 16 13:14:43.307: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5d7e8ec9-1124-4165-94a0-f4a0c8a9622b", Controller:(*bool)(0xc002aaec2a), BlockOwnerDeletion:(*bool)(0xc002aaec2b)}}
Mar 16 13:14:43.320: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ea7090be-a7f8-4496-bd28-d076900fb283", Controller:(*bool)(0xc002ea8d32), BlockOwnerDeletion:(*bool)(0xc002ea8d33)}}
Mar 16 13:14:43.344: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"86f87429-640e-4589-88c0-1322ff1008d7", Controller:(*bool)(0xc0011a03a2), BlockOwnerDeletion:(*bool)(0xc0011a03a3)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:14:48.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1462" for this suite.
• [SLOW TEST:5.666 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":33,"skipped":559,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:14:48.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:14:52.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-982" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":34,"skipped":566,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:14:52.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Mar 16 13:15:02.706: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 16 13:15:02.709: INFO: Pod pod-with-prestop-http-hook still exists
Mar 16 13:15:04.709: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 16 13:15:04.713: INFO: Pod pod-with-prestop-http-hook still exists
Mar 16 13:15:06.710: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 16 13:15:06.713: INFO: Pod pod-with-prestop-http-hook still exists
Mar 16 13:15:08.709: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 16 13:15:08.716: INFO: Pod pod-with-prestop-http-hook still exists
Mar 16 13:15:10.710: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 16 13:15:10.714: INFO: Pod pod-with-prestop-http-hook still exists
Mar 16 13:15:12.709: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 16 13:15:12.713: INFO: Pod pod-with-prestop-http-hook still exists
Mar 16 13:15:14.710: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Mar 16 13:15:14.713: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:15:14.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5154" for this suite.
• [SLOW TEST:22.146 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":603,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:15:14.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 16 13:15:14.789: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b36afa68-d349-40ab-a47e-b6f6363e42f1" in namespace "downward-api-402" to be "Succeeded or Failed"
Mar 16 13:15:14.830: INFO: Pod "downwardapi-volume-b36afa68-d349-40ab-a47e-b6f6363e42f1": Phase="Pending", Reason="", readiness=false. Elapsed: 40.046539ms
Mar 16 13:15:16.834: INFO: Pod "downwardapi-volume-b36afa68-d349-40ab-a47e-b6f6363e42f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04433823s
Mar 16 13:15:18.877: INFO: Pod "downwardapi-volume-b36afa68-d349-40ab-a47e-b6f6363e42f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087654033s
STEP: Saw pod success
Mar 16 13:15:18.877: INFO: Pod "downwardapi-volume-b36afa68-d349-40ab-a47e-b6f6363e42f1" satisfied condition "Succeeded or Failed"
Mar 16 13:15:18.880: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-b36afa68-d349-40ab-a47e-b6f6363e42f1 container client-container:
STEP: delete the pod
Mar 16 13:15:18.943: INFO: Waiting for pod downwardapi-volume-b36afa68-d349-40ab-a47e-b6f6363e42f1 to disappear
Mar 16 13:15:18.949: INFO: Pod downwardapi-volume-b36afa68-d349-40ab-a47e-b6f6363e42f1 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:15:18.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-402" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":36,"skipped":608,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:15:18.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:15:32.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6887" for this suite.
• [SLOW TEST:13.290 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":37,"skipped":618,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:15:32.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0316 13:15:43.763544 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 16 13:15:43.763: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:15:43.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9581" for this suite.
• [SLOW TEST:11.517 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":38,"skipped":625,"failed":0}
SS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:15:43.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 16 13:15:48.349: INFO: Successfully updated pod "pod-update-8cf67467-c510-496d-bde1-ee9de0820e68"
STEP: verifying the updated pod is in kubernetes
Mar 16 13:15:48.360: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:15:48.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2958" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":627,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:15:48.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 16 13:15:54.282: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:15:54.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4315" for this suite.
• [SLOW TEST:6.045 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":40,"skipped":664,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:15:54.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Mar 16 13:15:54.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Mar 16 13:16:05.492: INFO: >>> kubeConfig: /root/.kube/config
Mar 16 13:16:08.418: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:16:19.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3128" for this suite.
• [SLOW TEST:24.597 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group but different versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":41,"skipped":664,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:16:19.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2210 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2210;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2210 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2210;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2210.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2210.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2210.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2210.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2210.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2210.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2210.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2210.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2210.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2210.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2210.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2210.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2210.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 18.172.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.172.18_udp@PTR;check="$$(dig +tcp +noall +answer +search 18.172.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.172.18_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2210 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2210;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2210 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2210;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2210.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2210.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2210.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2210.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2210.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2210.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2210.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2210.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2210.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2210.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2210.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2210.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2210.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 18.172.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.172.18_udp@PTR;check="$$(dig +tcp +noall +answer +search 18.172.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.172.18_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 16 13:16:25.146: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:25.149: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:25.152: INFO: Unable to read wheezy_udp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:25.154: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:25.157: INFO: Unable to read wheezy_udp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:25.160: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:25.163: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:25.167: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:25.188: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:25.192: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:25.195: INFO: Unable to read jessie_udp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:25.199: INFO: Unable to read jessie_tcp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:25.201: INFO: Unable to read jessie_udp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:25.205: INFO: Unable to read jessie_tcp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:25.208: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:25.212: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:25.231: INFO: Lookups using dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2210 wheezy_tcp@dns-test-service.dns-2210 wheezy_udp@dns-test-service.dns-2210.svc wheezy_tcp@dns-test-service.dns-2210.svc wheezy_udp@_http._tcp.dns-test-service.dns-2210.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2210.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2210 jessie_tcp@dns-test-service.dns-2210 jessie_udp@dns-test-service.dns-2210.svc jessie_tcp@dns-test-service.dns-2210.svc jessie_udp@_http._tcp.dns-test-service.dns-2210.svc jessie_tcp@_http._tcp.dns-test-service.dns-2210.svc]
Mar 16 13:16:30.244: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:30.247: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:30.250: INFO: Unable to read wheezy_udp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:30.252: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:30.255: INFO: Unable to read wheezy_udp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:30.258: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:30.261: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:30.263: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef)
Mar 16 13:16:30.281:
INFO: Unable to read jessie_udp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:30.289: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:30.292: INFO: Unable to read jessie_udp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:30.295: INFO: Unable to read jessie_tcp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:30.297: INFO: Unable to read jessie_udp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:30.300: INFO: Unable to read jessie_tcp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:30.302: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:30.305: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) 
Mar 16 13:16:30.322: INFO: Lookups using dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2210 wheezy_tcp@dns-test-service.dns-2210 wheezy_udp@dns-test-service.dns-2210.svc wheezy_tcp@dns-test-service.dns-2210.svc wheezy_udp@_http._tcp.dns-test-service.dns-2210.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2210.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2210 jessie_tcp@dns-test-service.dns-2210 jessie_udp@dns-test-service.dns-2210.svc jessie_tcp@dns-test-service.dns-2210.svc jessie_udp@_http._tcp.dns-test-service.dns-2210.svc jessie_tcp@_http._tcp.dns-test-service.dns-2210.svc] Mar 16 13:16:35.236: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:35.240: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:35.243: INFO: Unable to read wheezy_udp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:35.246: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:35.249: INFO: Unable to read wheezy_udp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:35.252: 
INFO: Unable to read wheezy_tcp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:35.255: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:35.258: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:35.280: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:35.283: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:35.287: INFO: Unable to read jessie_udp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:35.290: INFO: Unable to read jessie_tcp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:35.294: INFO: Unable to read jessie_udp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) 
Mar 16 13:16:35.297: INFO: Unable to read jessie_tcp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:35.300: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:35.303: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:35.353: INFO: Lookups using dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2210 wheezy_tcp@dns-test-service.dns-2210 wheezy_udp@dns-test-service.dns-2210.svc wheezy_tcp@dns-test-service.dns-2210.svc wheezy_udp@_http._tcp.dns-test-service.dns-2210.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2210.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2210 jessie_tcp@dns-test-service.dns-2210 jessie_udp@dns-test-service.dns-2210.svc jessie_tcp@dns-test-service.dns-2210.svc jessie_udp@_http._tcp.dns-test-service.dns-2210.svc jessie_tcp@_http._tcp.dns-test-service.dns-2210.svc] Mar 16 13:16:40.236: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:40.239: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods 
dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:40.243: INFO: Unable to read wheezy_udp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:40.246: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:40.250: INFO: Unable to read wheezy_udp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:40.253: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:40.257: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:40.260: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:40.286: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:40.289: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the 
requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:40.291: INFO: Unable to read jessie_udp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:40.294: INFO: Unable to read jessie_tcp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:40.296: INFO: Unable to read jessie_udp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:40.298: INFO: Unable to read jessie_tcp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:40.301: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:40.304: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:40.320: INFO: Lookups using dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2210 wheezy_tcp@dns-test-service.dns-2210 wheezy_udp@dns-test-service.dns-2210.svc wheezy_tcp@dns-test-service.dns-2210.svc wheezy_udp@_http._tcp.dns-test-service.dns-2210.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-2210.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2210 jessie_tcp@dns-test-service.dns-2210 jessie_udp@dns-test-service.dns-2210.svc jessie_tcp@dns-test-service.dns-2210.svc jessie_udp@_http._tcp.dns-test-service.dns-2210.svc jessie_tcp@_http._tcp.dns-test-service.dns-2210.svc] Mar 16 13:16:45.236: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:45.239: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:45.242: INFO: Unable to read wheezy_udp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:45.245: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:45.248: INFO: Unable to read wheezy_udp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:45.251: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:45.254: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2210.svc from pod 
dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:45.256: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:45.322: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:45.325: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:45.328: INFO: Unable to read jessie_udp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:45.331: INFO: Unable to read jessie_tcp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:45.334: INFO: Unable to read jessie_udp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:45.337: INFO: Unable to read jessie_tcp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:45.340: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:45.343: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:45.365: INFO: Lookups using dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2210 wheezy_tcp@dns-test-service.dns-2210 wheezy_udp@dns-test-service.dns-2210.svc wheezy_tcp@dns-test-service.dns-2210.svc wheezy_udp@_http._tcp.dns-test-service.dns-2210.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2210.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2210 jessie_tcp@dns-test-service.dns-2210 jessie_udp@dns-test-service.dns-2210.svc jessie_tcp@dns-test-service.dns-2210.svc jessie_udp@_http._tcp.dns-test-service.dns-2210.svc jessie_tcp@_http._tcp.dns-test-service.dns-2210.svc] Mar 16 13:16:50.235: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:50.238: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:50.241: INFO: Unable to read wheezy_udp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:50.243: INFO: Unable to 
read wheezy_tcp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:50.245: INFO: Unable to read wheezy_udp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:50.247: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:50.248: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:50.250: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:50.265: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:50.268: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:50.270: INFO: Unable to read jessie_udp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 
13:16:50.272: INFO: Unable to read jessie_tcp@dns-test-service.dns-2210 from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:50.275: INFO: Unable to read jessie_udp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:50.278: INFO: Unable to read jessie_tcp@dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:50.280: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:50.283: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2210.svc from pod dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef: the server could not find the requested resource (get pods dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef) Mar 16 13:16:50.301: INFO: Lookups using dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2210 wheezy_tcp@dns-test-service.dns-2210 wheezy_udp@dns-test-service.dns-2210.svc wheezy_tcp@dns-test-service.dns-2210.svc wheezy_udp@_http._tcp.dns-test-service.dns-2210.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2210.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2210 jessie_tcp@dns-test-service.dns-2210 jessie_udp@dns-test-service.dns-2210.svc jessie_tcp@dns-test-service.dns-2210.svc jessie_udp@_http._tcp.dns-test-service.dns-2210.svc 
jessie_tcp@_http._tcp.dns-test-service.dns-2210.svc] Mar 16 13:16:55.321: INFO: DNS probes using dns-2210/dns-test-04daebf4-2875-42ac-9b34-a20025cb8eef succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:16:55.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2210" for this suite. • [SLOW TEST:36.822 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":42,"skipped":673,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:16:55.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 16 13:16:59.132: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:16:59.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6559" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":675,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:16:59.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:16:59.238: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes 
[AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:17:03.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6075" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":681,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:17:03.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-bb4c99de-5152-4408-a5c5-c3ca58e2999c in namespace container-probe-2680 Mar 16 13:17:07.511: INFO: Started pod liveness-bb4c99de-5152-4408-a5c5-c3ca58e2999c in namespace container-probe-2680 STEP: checking the pod's current state and verifying that restartCount is present Mar 16 13:17:07.514: INFO: Initial restart count of pod liveness-bb4c99de-5152-4408-a5c5-c3ca58e2999c is 0 Mar 16 13:17:29.562: INFO: Restart count of pod container-probe-2680/liveness-bb4c99de-5152-4408-a5c5-c3ca58e2999c is now 1 (22.048477251s elapsed) STEP: deleting the pod [AfterEach] 
[k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:17:29.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2680" for this suite. • [SLOW TEST:26.155 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":703,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:17:29.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4239 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-4239 
I0316 13:17:29.789627 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4239, replica count: 2 I0316 13:17:32.840110 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 13:17:35.840341 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 16 13:17:35.840: INFO: Creating new exec pod Mar 16 13:17:42.853: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4239 execpodc6hxv -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 16 13:17:43.063: INFO: stderr: "I0316 13:17:42.994610 403 log.go:172] (0xc00098c160) (0xc00080e280) Create stream\nI0316 13:17:42.994662 403 log.go:172] (0xc00098c160) (0xc00080e280) Stream added, broadcasting: 1\nI0316 13:17:42.996771 403 log.go:172] (0xc00098c160) Reply frame received for 1\nI0316 13:17:42.996813 403 log.go:172] (0xc00098c160) (0xc0002b3180) Create stream\nI0316 13:17:42.996825 403 log.go:172] (0xc00098c160) (0xc0002b3180) Stream added, broadcasting: 3\nI0316 13:17:42.997966 403 log.go:172] (0xc00098c160) Reply frame received for 3\nI0316 13:17:42.997993 403 log.go:172] (0xc00098c160) (0xc00080e320) Create stream\nI0316 13:17:42.998003 403 log.go:172] (0xc00098c160) (0xc00080e320) Stream added, broadcasting: 5\nI0316 13:17:42.999062 403 log.go:172] (0xc00098c160) Reply frame received for 5\nI0316 13:17:43.056655 403 log.go:172] (0xc00098c160) Data frame received for 5\nI0316 13:17:43.056701 403 log.go:172] (0xc00080e320) (5) Data frame handling\nI0316 13:17:43.056761 403 log.go:172] (0xc00080e320) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0316 13:17:43.056910 403 log.go:172] (0xc00098c160) Data frame received for 5\nI0316 13:17:43.056948 403 log.go:172] 
(0xc00080e320) (5) Data frame handling\nI0316 13:17:43.056976 403 log.go:172] (0xc00080e320) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0316 13:17:43.057541 403 log.go:172] (0xc00098c160) Data frame received for 3\nI0316 13:17:43.057569 403 log.go:172] (0xc0002b3180) (3) Data frame handling\nI0316 13:17:43.057870 403 log.go:172] (0xc00098c160) Data frame received for 5\nI0316 13:17:43.057899 403 log.go:172] (0xc00080e320) (5) Data frame handling\nI0316 13:17:43.059697 403 log.go:172] (0xc00098c160) Data frame received for 1\nI0316 13:17:43.059717 403 log.go:172] (0xc00080e280) (1) Data frame handling\nI0316 13:17:43.059744 403 log.go:172] (0xc00080e280) (1) Data frame sent\nI0316 13:17:43.059780 403 log.go:172] (0xc00098c160) (0xc00080e280) Stream removed, broadcasting: 1\nI0316 13:17:43.059818 403 log.go:172] (0xc00098c160) Go away received\nI0316 13:17:43.060049 403 log.go:172] (0xc00098c160) (0xc00080e280) Stream removed, broadcasting: 1\nI0316 13:17:43.060063 403 log.go:172] (0xc00098c160) (0xc0002b3180) Stream removed, broadcasting: 3\nI0316 13:17:43.060070 403 log.go:172] (0xc00098c160) (0xc00080e320) Stream removed, broadcasting: 5\n" Mar 16 13:17:43.063: INFO: stdout: "" Mar 16 13:17:43.064: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4239 execpodc6hxv -- /bin/sh -x -c nc -zv -t -w 2 10.96.144.244 80' Mar 16 13:17:43.273: INFO: stderr: "I0316 13:17:43.196213 426 log.go:172] (0xc000b8ce70) (0xc0008da5a0) Create stream\nI0316 13:17:43.196277 426 log.go:172] (0xc000b8ce70) (0xc0008da5a0) Stream added, broadcasting: 1\nI0316 13:17:43.199099 426 log.go:172] (0xc000b8ce70) Reply frame received for 1\nI0316 13:17:43.199149 426 log.go:172] (0xc000b8ce70) (0xc000a501e0) Create stream\nI0316 13:17:43.199186 426 log.go:172] (0xc000b8ce70) (0xc000a501e0) Stream added, broadcasting: 3\nI0316 13:17:43.200126 426 log.go:172] (0xc000b8ce70) 
Reply frame received for 3\nI0316 13:17:43.200171 426 log.go:172] (0xc000b8ce70) (0xc0008da640) Create stream\nI0316 13:17:43.200194 426 log.go:172] (0xc000b8ce70) (0xc0008da640) Stream added, broadcasting: 5\nI0316 13:17:43.201001 426 log.go:172] (0xc000b8ce70) Reply frame received for 5\nI0316 13:17:43.268386 426 log.go:172] (0xc000b8ce70) Data frame received for 3\nI0316 13:17:43.268433 426 log.go:172] (0xc000a501e0) (3) Data frame handling\nI0316 13:17:43.268488 426 log.go:172] (0xc000b8ce70) Data frame received for 5\nI0316 13:17:43.268512 426 log.go:172] (0xc0008da640) (5) Data frame handling\nI0316 13:17:43.268534 426 log.go:172] (0xc0008da640) (5) Data frame sent\nI0316 13:17:43.268553 426 log.go:172] (0xc000b8ce70) Data frame received for 5\nI0316 13:17:43.268574 426 log.go:172] (0xc0008da640) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.144.244 80\nConnection to 10.96.144.244 80 port [tcp/http] succeeded!\nI0316 13:17:43.270023 426 log.go:172] (0xc000b8ce70) Data frame received for 1\nI0316 13:17:43.270060 426 log.go:172] (0xc0008da5a0) (1) Data frame handling\nI0316 13:17:43.270081 426 log.go:172] (0xc0008da5a0) (1) Data frame sent\nI0316 13:17:43.270171 426 log.go:172] (0xc000b8ce70) (0xc0008da5a0) Stream removed, broadcasting: 1\nI0316 13:17:43.270587 426 log.go:172] (0xc000b8ce70) Go away received\nI0316 13:17:43.270728 426 log.go:172] (0xc000b8ce70) (0xc0008da5a0) Stream removed, broadcasting: 1\nI0316 13:17:43.270763 426 log.go:172] (0xc000b8ce70) (0xc000a501e0) Stream removed, broadcasting: 3\nI0316 13:17:43.270781 426 log.go:172] (0xc000b8ce70) (0xc0008da640) Stream removed, broadcasting: 5\n" Mar 16 13:17:43.273: INFO: stdout: "" Mar 16 13:17:43.273: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:17:43.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "services-4239" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:13.734 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":46,"skipped":724,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:17:43.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:17:49.605: INFO: Waiting up to 5m0s for pod "client-envvars-1f717689-640d-420a-9b2e-d50033583f3a" in namespace "pods-8918" to be "Succeeded or Failed" Mar 16 13:17:49.609: INFO: Pod "client-envvars-1f717689-640d-420a-9b2e-d50033583f3a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.685067ms Mar 16 13:17:51.625: INFO: Pod "client-envvars-1f717689-640d-420a-9b2e-d50033583f3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020472651s Mar 16 13:17:53.630: INFO: Pod "client-envvars-1f717689-640d-420a-9b2e-d50033583f3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024950814s Mar 16 13:17:55.634: INFO: Pod "client-envvars-1f717689-640d-420a-9b2e-d50033583f3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028982564s STEP: Saw pod success Mar 16 13:17:55.634: INFO: Pod "client-envvars-1f717689-640d-420a-9b2e-d50033583f3a" satisfied condition "Succeeded or Failed" Mar 16 13:17:55.637: INFO: Trying to get logs from node latest-worker pod client-envvars-1f717689-640d-420a-9b2e-d50033583f3a container env3cont: STEP: delete the pod Mar 16 13:17:55.671: INFO: Waiting for pod client-envvars-1f717689-640d-420a-9b2e-d50033583f3a to disappear Mar 16 13:17:55.675: INFO: Pod client-envvars-1f717689-640d-420a-9b2e-d50033583f3a no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:17:55.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8918" for this suite. 
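The "environment variables for services" spec above depends on the kubelet injecting `<SVC>_SERVICE_HOST` / `<SVC>_SERVICE_PORT` variables into any pod created after the Service exists in the same namespace. A minimal sketch of the shape of that test fixture, with illustrative names (the real test generates randomized names like `client-envvars-1f717689-...`):

```yaml
# Hypothetical Service; pods created afterwards in this namespace receive
# FOO_SERVICE_HOST and FOO_SERVICE_PORT environment variables.
apiVersion: v1
kind: Service
metadata:
  name: foo
spec:
  selector:
    app: foo
  ports:
  - port: 80
---
# Pod that prints the injected variables, analogous to the test's env3cont container.
apiVersion: v1
kind: Pod
metadata:
  name: client-envvars
spec:
  restartPolicy: Never
  containers:
  - name: env3cont
    image: busybox:1.29
    command: ["sh", "-c", "env | grep FOO_SERVICE"]
```

The variable prefix is the Service name uppercased with dashes converted to underscores, which is why the test must create the Service before the client pod.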
• [SLOW TEST:12.360 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":733,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:17:55.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:17:55.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4903" for this suite. STEP: Destroying namespace "nspatchtest-c10462b6-7b6a-4cc3-a909-34b7270ac005-3816" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":48,"skipped":736,"failed":0} SS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:17:55.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 16 13:17:55.916: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:18:12.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3393" for this suite. 
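The "submitted and removed" spec above watches a pod through creation, graceful deletion, and the final deletion event. A sketch of a pod that would exercise the same lifecycle (names and image are illustrative, not the test's generated values):

```yaml
# Hypothetical pod; deleting it with a grace period lets the watch observe
# the termination notice before the delete event arrives.
apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove
spec:
  terminationGracePeriodSeconds: 30   # kubelet waits up to 30s before SIGKILL
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
```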
• [SLOW TEST:17.155 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":49,"skipped":738,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:18:13.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 13:18:13.803: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 13:18:15.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719961493, 
loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719961493, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719961493, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719961493, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 13:18:18.857: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:18:19.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7405" for this suite. STEP: Destroying namespace "webhook-7405-markers" for this suite. 
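The fail-closed spec above registers a webhook whose backend is unreachable and asserts that admission is rejected. The key field is `failurePolicy: Fail`. A hedged sketch of such a registration, assuming the `e2e-test-webhook` service and `webhook-7405` namespace seen in the log (the path and CA bundle are placeholders):

```yaml
# Sketch of a fail-closed webhook: requests are rejected when the
# backend cannot be reached, because failurePolicy is Fail.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-webhook
webhooks:
- name: fail-closed.example.com
  failurePolicy: Fail              # reject the request if the webhook call fails
  sideEffects: None
  admissionReviewVersions: ["v1"]
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-7405
      path: /unreachable           # a path the server does not serve
      port: 443
    caBundle: ""                   # placeholder; real config needs the CA cert
```

With this in place, a `kubectl create configmap` in the targeted namespace is unconditionally rejected, which is exactly what the spec asserts.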
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.793 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":50,"skipped":754,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:18:19.799: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:18:24.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4954" for this suite. • [SLOW TEST:5.161 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":769,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:18:24.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 16 13:18:25.056: INFO: Waiting up to 
5m0s for pod "downwardapi-volume-3cbd3db9-5d3f-401b-88f2-dd92f8e20a5c" in namespace "projected-2495" to be "Succeeded or Failed" Mar 16 13:18:25.060: INFO: Pod "downwardapi-volume-3cbd3db9-5d3f-401b-88f2-dd92f8e20a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.86335ms Mar 16 13:18:27.126: INFO: Pod "downwardapi-volume-3cbd3db9-5d3f-401b-88f2-dd92f8e20a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070145636s Mar 16 13:18:29.130: INFO: Pod "downwardapi-volume-3cbd3db9-5d3f-401b-88f2-dd92f8e20a5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074444693s STEP: Saw pod success Mar 16 13:18:29.130: INFO: Pod "downwardapi-volume-3cbd3db9-5d3f-401b-88f2-dd92f8e20a5c" satisfied condition "Succeeded or Failed" Mar 16 13:18:29.134: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3cbd3db9-5d3f-401b-88f2-dd92f8e20a5c container client-container: STEP: delete the pod Mar 16 13:18:29.171: INFO: Waiting for pod downwardapi-volume-3cbd3db9-5d3f-401b-88f2-dd92f8e20a5c to disappear Mar 16 13:18:29.187: INFO: Pod downwardapi-volume-3cbd3db9-5d3f-401b-88f2-dd92f8e20a5c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:18:29.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2495" for this suite. 
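The projected downward API spec above mounts the container's memory limit as a file. A minimal sketch of that pod shape, with hypothetical names in place of the test's generated ones:

```yaml
# Pod exposing its own memory limit via a projected downwardAPI volume.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi               # the value the volume file should contain (in bytes)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```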
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":780,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:18:29.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:18:29.276: INFO: (0) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/ pods/ (200; 5.038699ms)
Mar 16 13:18:29.280: INFO: (1) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.141417ms)
Mar 16 13:18:29.284: INFO: (2) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.14213ms)
Mar 16 13:18:29.287: INFO: (3) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.393815ms)
Mar 16 13:18:29.291: INFO: (4) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.381861ms)
Mar 16 13:18:29.294: INFO: (5) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.608042ms)
Mar 16 13:18:29.298: INFO: (6) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.779166ms)
Mar 16 13:18:29.302: INFO: (7) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.699983ms)
Mar 16 13:18:29.305: INFO: (8) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.232478ms)
Mar 16 13:18:29.308: INFO: (9) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.18532ms)
Mar 16 13:18:29.311: INFO: (10) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.183712ms)
Mar 16 13:18:29.315: INFO: (11) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.210111ms)
Mar 16 13:18:29.318: INFO: (12) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.356705ms)
Mar 16 13:18:29.322: INFO: (13) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.934823ms)
Mar 16 13:18:29.341: INFO: (14) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 19.274898ms)
Mar 16 13:18:29.345: INFO: (15) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.158575ms)
Mar 16 13:18:29.348: INFO: (16) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.095533ms)
Mar 16 13:18:29.351: INFO: (17) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.932663ms)
Mar 16 13:18:29.354: INFO: (18) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.251722ms)
Mar 16 13:18:29.358: INFO: (19) /api/v1/nodes/latest-worker2:10250/proxy/logs/: containers/ pods/
(200; 3.617032ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:18:29.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-579" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":275,"completed":53,"skipped":827,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:18:29.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 13:18:29.900: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 13:18:31.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719961509, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719961509, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719961509, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719961509, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 13:18:35.054: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:18:35.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9421" for this suite. STEP: Destroying namespace "webhook-9421-markers" for this suite. 
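The next spec exercises optional ConfigMap volumes: the pod must start even when a referenced ConfigMap is absent, and the mounted files must track later creates, updates, and deletes. A sketch of the volume shape, with a hypothetical ConfigMap name in place of the test's generated one:

```yaml
# Pod mounting an optional ConfigMap; "optional: true" lets the pod start
# before the ConfigMap exists, and the kubelet projects it in once created.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  containers:
  - name: createcm-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/cm-volume
  volumes:
  - name: cm-volume
    projected:
      sources:
      - configMap:
          name: cm-test-opt-create    # hypothetical name
          optional: true
```

Updates are eventually consistent — the kubelet syncs projected volumes on its own period, which is why the spec above spends most of its 84 seconds in "waiting to observe update in volume".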
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.799 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":54,"skipped":855,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:18:35.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-245f4763-3cea-4701-87a3-a40a5f0d6aa5 STEP: Creating configMap with name cm-test-opt-upd-c3cc2b7d-370d-4326-bede-5e0722c6fc11 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-245f4763-3cea-4701-87a3-a40a5f0d6aa5 STEP: Updating configmap cm-test-opt-upd-c3cc2b7d-370d-4326-bede-5e0722c6fc11 STEP: Creating configMap with name 
cm-test-opt-create-1a03952d-6711-43fa-a2a7-3743e75ef1f9 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:20:00.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4054" for this suite. • [SLOW TEST:84.988 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":866,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:20:00.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-22bp STEP: Creating a pod to test atomic-volume-subpath Mar 16 
13:20:00.220: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-22bp" in namespace "subpath-4829" to be "Succeeded or Failed" Mar 16 13:20:00.224: INFO: Pod "pod-subpath-test-projected-22bp": Phase="Pending", Reason="", readiness=false. Elapsed: 3.681566ms Mar 16 13:20:02.228: INFO: Pod "pod-subpath-test-projected-22bp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007812348s Mar 16 13:20:04.231: INFO: Pod "pod-subpath-test-projected-22bp": Phase="Running", Reason="", readiness=true. Elapsed: 4.011538819s Mar 16 13:20:06.236: INFO: Pod "pod-subpath-test-projected-22bp": Phase="Running", Reason="", readiness=true. Elapsed: 6.015593268s Mar 16 13:20:08.301: INFO: Pod "pod-subpath-test-projected-22bp": Phase="Running", Reason="", readiness=true. Elapsed: 8.081217019s Mar 16 13:20:10.305: INFO: Pod "pod-subpath-test-projected-22bp": Phase="Running", Reason="", readiness=true. Elapsed: 10.085234045s Mar 16 13:20:12.309: INFO: Pod "pod-subpath-test-projected-22bp": Phase="Running", Reason="", readiness=true. Elapsed: 12.089435688s Mar 16 13:20:14.313: INFO: Pod "pod-subpath-test-projected-22bp": Phase="Running", Reason="", readiness=true. Elapsed: 14.092795048s Mar 16 13:20:16.317: INFO: Pod "pod-subpath-test-projected-22bp": Phase="Running", Reason="", readiness=true. Elapsed: 16.096901029s Mar 16 13:20:18.321: INFO: Pod "pod-subpath-test-projected-22bp": Phase="Running", Reason="", readiness=true. Elapsed: 18.101006839s Mar 16 13:20:20.325: INFO: Pod "pod-subpath-test-projected-22bp": Phase="Running", Reason="", readiness=true. Elapsed: 20.105207986s Mar 16 13:20:22.329: INFO: Pod "pod-subpath-test-projected-22bp": Phase="Running", Reason="", readiness=true. Elapsed: 22.109540674s Mar 16 13:20:24.336: INFO: Pod "pod-subpath-test-projected-22bp": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.115760027s
STEP: Saw pod success
Mar 16 13:20:24.336: INFO: Pod "pod-subpath-test-projected-22bp" satisfied condition "Succeeded or Failed"
Mar 16 13:20:24.340: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-projected-22bp container test-container-subpath-projected-22bp:
STEP: delete the pod
Mar 16 13:20:24.369: INFO: Waiting for pod pod-subpath-test-projected-22bp to disappear
Mar 16 13:20:24.373: INFO: Pod pod-subpath-test-projected-22bp no longer exists
STEP: Deleting pod pod-subpath-test-projected-22bp
Mar 16 13:20:24.373: INFO: Deleting pod "pod-subpath-test-projected-22bp" in namespace "subpath-4829"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:20:24.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4829" for this suite.
• [SLOW TEST:24.228 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":56,"skipped":891,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16
13:20:24.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-62a85774-7a64-4ee7-8221-ee1d2d698d51 STEP: Creating a pod to test consume configMaps Mar 16 13:20:24.462: INFO: Waiting up to 5m0s for pod "pod-configmaps-ff826392-ff9d-4726-bb32-6c6525bb3dbc" in namespace "configmap-3831" to be "Succeeded or Failed" Mar 16 13:20:24.481: INFO: Pod "pod-configmaps-ff826392-ff9d-4726-bb32-6c6525bb3dbc": Phase="Pending", Reason="", readiness=false. Elapsed: 19.163907ms Mar 16 13:20:26.485: INFO: Pod "pod-configmaps-ff826392-ff9d-4726-bb32-6c6525bb3dbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023469229s Mar 16 13:20:28.489: INFO: Pod "pod-configmaps-ff826392-ff9d-4726-bb32-6c6525bb3dbc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027597964s STEP: Saw pod success Mar 16 13:20:28.489: INFO: Pod "pod-configmaps-ff826392-ff9d-4726-bb32-6c6525bb3dbc" satisfied condition "Succeeded or Failed" Mar 16 13:20:28.492: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-ff826392-ff9d-4726-bb32-6c6525bb3dbc container configmap-volume-test: STEP: delete the pod Mar 16 13:20:28.514: INFO: Waiting for pod pod-configmaps-ff826392-ff9d-4726-bb32-6c6525bb3dbc to disappear Mar 16 13:20:28.517: INFO: Pod pod-configmaps-ff826392-ff9d-4726-bb32-6c6525bb3dbc no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:20:28.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3831" for this suite. 
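The repeated "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" / "Phase=Pending ... Elapsed" entries throughout this log come from a poll-until-terminal-phase pattern. A minimal Python sketch of that pattern follows; `wait_for_pod_phase` and `get_phase` are hypothetical names for illustration, not the e2e framework's actual API, and the phase source here is simulated rather than a real API call.

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal pod phase or timeout expires.

    get_phase is a caller-supplied function (a stand-in for reading
    pod.status.phase from the API server). Returns the terminal phase.
    """
    start = clock()
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase  # the "satisfied condition" case in the log
        if clock() - start > timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)  # the ~2s gap between Elapsed entries above

# Simulated phase sequence, mirroring Pending -> Pending -> Running -> Succeeded.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
print(wait_for_pod_phase(lambda: next(phases), sleep=lambda s: None))  # Succeeded
```

Injecting `clock` and `sleep` keeps the sketch testable without real waiting; the production analogue would poll the API server instead.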
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":899,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:20:28.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Mar 16 13:20:28.812: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:28.871: INFO: Number of nodes with available pods: 0 Mar 16 13:20:28.871: INFO: Node latest-worker is running more than one daemon pod Mar 16 13:20:29.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:29.880: INFO: Number of nodes with available pods: 0 Mar 16 13:20:29.880: INFO: Node latest-worker is running more than one daemon pod Mar 16 13:20:30.876: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:30.879: INFO: Number of nodes with available pods: 0 Mar 16 13:20:30.879: INFO: Node latest-worker is running more than one daemon pod Mar 16 13:20:31.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:31.880: INFO: Number of nodes with available pods: 0 Mar 16 13:20:31.880: INFO: Node latest-worker is running more than one daemon pod Mar 16 13:20:32.877: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:32.880: INFO: Number of nodes with available pods: 2 Mar 16 13:20:32.880: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
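The check being polled above skips nodes whose NoSchedule taints the DaemonSet does not tolerate (hence the repeated "can't tolerate node latest-control-plane ... skip checking this node" lines) and then counts how many remaining nodes have an available daemon pod. A simplified Python sketch of that logic, with hypothetical data structures standing in for the real Node and Pod objects:

```python
def schedulable_nodes(nodes, tolerated_taints=()):
    """Drop nodes carrying any taint the daemon pods do not tolerate,
    mirroring the 'skip checking this node' lines in the log."""
    return [n for n in nodes
            if all(t in tolerated_taints for t in n.get("taints", ()))]

def nodes_with_available_pod(nodes, available_pods_by_node):
    """Count schedulable nodes that currently report an available daemon pod."""
    return sum(1 for n in schedulable_nodes(nodes)
               if available_pods_by_node.get(n["name"], 0) >= 1)

# Illustrative cluster shape matching this run: one tainted control-plane
# node and two workers.
nodes = [
    {"name": "latest-control-plane",
     "taints": ["node-role.kubernetes.io/master:NoSchedule"]},
    {"name": "latest-worker"},
    {"name": "latest-worker2"},
]
print(nodes_with_available_pod(nodes, {"latest-worker": 1,
                                       "latest-worker2": 1}))  # 2
```

The e2e test simply repeats this count every second until it equals the number of schedulable nodes, which is why the log settles at "Number of running nodes: 2, number of available pods: 2".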
Mar 16 13:20:32.896: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:32.899: INFO: Number of nodes with available pods: 1 Mar 16 13:20:32.899: INFO: Node latest-worker2 is running more than one daemon pod Mar 16 13:20:33.904: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:33.907: INFO: Number of nodes with available pods: 1 Mar 16 13:20:33.907: INFO: Node latest-worker2 is running more than one daemon pod Mar 16 13:20:34.903: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:34.907: INFO: Number of nodes with available pods: 1 Mar 16 13:20:34.907: INFO: Node latest-worker2 is running more than one daemon pod Mar 16 13:20:35.904: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:35.908: INFO: Number of nodes with available pods: 1 Mar 16 13:20:35.908: INFO: Node latest-worker2 is running more than one daemon pod Mar 16 13:20:36.904: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:36.908: INFO: Number of nodes with available pods: 1 Mar 16 13:20:36.908: INFO: Node latest-worker2 is running more than one daemon pod Mar 16 13:20:37.904: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:37.908: INFO: Number of nodes with available pods: 1 Mar 16 
13:20:37.908: INFO: Node latest-worker2 is running more than one daemon pod Mar 16 13:20:38.919: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:38.922: INFO: Number of nodes with available pods: 1 Mar 16 13:20:38.922: INFO: Node latest-worker2 is running more than one daemon pod Mar 16 13:20:39.904: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:39.908: INFO: Number of nodes with available pods: 1 Mar 16 13:20:39.908: INFO: Node latest-worker2 is running more than one daemon pod Mar 16 13:20:40.904: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:20:40.908: INFO: Number of nodes with available pods: 2 Mar 16 13:20:40.908: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-27, will wait for the garbage collector to delete the pods Mar 16 13:20:40.974: INFO: Deleting DaemonSet.extensions daemon-set took: 9.567303ms Mar 16 13:20:41.274: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.342328ms Mar 16 13:20:52.778: INFO: Number of nodes with available pods: 0 Mar 16 13:20:52.778: INFO: Number of running nodes: 0, number of available pods: 0 Mar 16 13:20:52.784: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-27/daemonsets","resourceVersion":"270874"},"items":null} Mar 16 13:20:52.787: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-27/pods","resourceVersion":"270874"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:20:52.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-27" for this suite.
• [SLOW TEST:24.265 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":58,"skipped":920,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:20:52.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar
16 13:20:52.909: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a1ea4e3d-e7d5-4f58-959e-469df7540359" in namespace "projected-7124" to be "Succeeded or Failed" Mar 16 13:20:52.913: INFO: Pod "downwardapi-volume-a1ea4e3d-e7d5-4f58-959e-469df7540359": Phase="Pending", Reason="", readiness=false. Elapsed: 3.7442ms Mar 16 13:20:54.917: INFO: Pod "downwardapi-volume-a1ea4e3d-e7d5-4f58-959e-469df7540359": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007975374s Mar 16 13:20:56.921: INFO: Pod "downwardapi-volume-a1ea4e3d-e7d5-4f58-959e-469df7540359": Phase="Running", Reason="", readiness=true. Elapsed: 4.01206095s Mar 16 13:20:58.926: INFO: Pod "downwardapi-volume-a1ea4e3d-e7d5-4f58-959e-469df7540359": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016445394s STEP: Saw pod success Mar 16 13:20:58.926: INFO: Pod "downwardapi-volume-a1ea4e3d-e7d5-4f58-959e-469df7540359" satisfied condition "Succeeded or Failed" Mar 16 13:20:58.929: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-a1ea4e3d-e7d5-4f58-959e-469df7540359 container client-container: STEP: delete the pod Mar 16 13:20:58.947: INFO: Waiting for pod downwardapi-volume-a1ea4e3d-e7d5-4f58-959e-469df7540359 to disappear Mar 16 13:20:58.949: INFO: Pod downwardapi-volume-a1ea4e3d-e7d5-4f58-959e-469df7540359 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:20:58.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7124" for this suite. 
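The pod this test creates mounts a downward API volume exposing the container's cpu limit; since the container sets no limit, the kubelet falls back to node allocatable, which the test then reads from the container log. A hypothetical manifest in that spirit (names and image are illustrative, not taken from the e2e source):

```yaml
# Illustrative sketch only: a downward API volume with a resourceFieldRef
# for limits.cpu; with no cpu limit set on the container, the exposed
# value defaults to the node's allocatable cpu.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```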
• [SLOW TEST:6.148 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":931,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:20:58.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 16 13:20:59.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8965
I0316 13:20:59.038603 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8965, replica count: 1
I0316 13:21:00.089035 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0316 13:21:01.089355 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0
inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 13:21:02.089671 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 13:21:03.089969 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 16 13:21:03.216: INFO: Created: latency-svc-nbdz8 Mar 16 13:21:03.232: INFO: Got endpoints: latency-svc-nbdz8 [42.593272ms] Mar 16 13:21:03.270: INFO: Created: latency-svc-jtt6k Mar 16 13:21:03.295: INFO: Got endpoints: latency-svc-jtt6k [62.762748ms] Mar 16 13:21:03.305: INFO: Created: latency-svc-26hxq Mar 16 13:21:03.315: INFO: Got endpoints: latency-svc-26hxq [82.078667ms] Mar 16 13:21:03.339: INFO: Created: latency-svc-m94m4 Mar 16 13:21:03.357: INFO: Got endpoints: latency-svc-m94m4 [124.644517ms] Mar 16 13:21:03.374: INFO: Created: latency-svc-272wd Mar 16 13:21:03.387: INFO: Got endpoints: latency-svc-272wd [154.778739ms] Mar 16 13:21:03.421: INFO: Created: latency-svc-qzvhg Mar 16 13:21:03.441: INFO: Got endpoints: latency-svc-qzvhg [208.055272ms] Mar 16 13:21:03.465: INFO: Created: latency-svc-hvlt2 Mar 16 13:21:03.498: INFO: Got endpoints: latency-svc-hvlt2 [265.12861ms] Mar 16 13:21:03.559: INFO: Created: latency-svc-54wv8 Mar 16 13:21:03.606: INFO: Created: latency-svc-vhpfp Mar 16 13:21:03.606: INFO: Got endpoints: latency-svc-54wv8 [373.513629ms] Mar 16 13:21:03.621: INFO: Got endpoints: latency-svc-vhpfp [388.217839ms] Mar 16 13:21:03.691: INFO: Created: latency-svc-pbr6q Mar 16 13:21:03.699: INFO: Got endpoints: latency-svc-pbr6q [466.54208ms] Mar 16 13:21:03.747: INFO: Created: latency-svc-99t65 Mar 16 13:21:03.829: INFO: Got endpoints: latency-svc-99t65 [596.073938ms] Mar 16 13:21:03.870: INFO: Created: latency-svc-qsb2v Mar 16 13:21:03.885: INFO: Got endpoints: latency-svc-qsb2v [652.199081ms] Mar 16 13:21:03.906: INFO: Created: latency-svc-twnt7 
Mar 16 13:21:03.914: INFO: Got endpoints: latency-svc-twnt7 [681.611124ms] Mar 16 13:21:03.960: INFO: Created: latency-svc-jwfnl Mar 16 13:21:03.980: INFO: Created: latency-svc-hwgk9 Mar 16 13:21:03.980: INFO: Got endpoints: latency-svc-jwfnl [747.343174ms] Mar 16 13:21:04.011: INFO: Got endpoints: latency-svc-hwgk9 [777.932585ms] Mar 16 13:21:04.040: INFO: Created: latency-svc-p8rhz Mar 16 13:21:04.058: INFO: Got endpoints: latency-svc-p8rhz [825.214603ms] Mar 16 13:21:04.091: INFO: Created: latency-svc-p6t29 Mar 16 13:21:04.100: INFO: Got endpoints: latency-svc-p6t29 [804.409292ms] Mar 16 13:21:04.139: INFO: Created: latency-svc-czdhn Mar 16 13:21:04.148: INFO: Got endpoints: latency-svc-czdhn [832.925803ms] Mar 16 13:21:04.218: INFO: Created: latency-svc-zhncl Mar 16 13:21:04.238: INFO: Got endpoints: latency-svc-zhncl [880.985411ms] Mar 16 13:21:04.239: INFO: Created: latency-svc-jrnn2 Mar 16 13:21:04.250: INFO: Got endpoints: latency-svc-jrnn2 [862.516392ms] Mar 16 13:21:04.298: INFO: Created: latency-svc-m79gh Mar 16 13:21:04.316: INFO: Got endpoints: latency-svc-m79gh [875.126148ms] Mar 16 13:21:04.355: INFO: Created: latency-svc-kgz5h Mar 16 13:21:04.370: INFO: Got endpoints: latency-svc-kgz5h [871.787968ms] Mar 16 13:21:04.397: INFO: Created: latency-svc-4w76r Mar 16 13:21:04.430: INFO: Got endpoints: latency-svc-4w76r [824.119984ms] Mar 16 13:21:04.497: INFO: Created: latency-svc-h4x9s Mar 16 13:21:04.519: INFO: Got endpoints: latency-svc-h4x9s [898.379034ms] Mar 16 13:21:04.544: INFO: Created: latency-svc-mwvdd Mar 16 13:21:04.555: INFO: Got endpoints: latency-svc-mwvdd [856.272243ms] Mar 16 13:21:04.574: INFO: Created: latency-svc-q6fgx Mar 16 13:21:04.637: INFO: Got endpoints: latency-svc-q6fgx [807.768617ms] Mar 16 13:21:04.661: INFO: Created: latency-svc-qpjtd Mar 16 13:21:04.669: INFO: Got endpoints: latency-svc-qpjtd [783.976153ms] Mar 16 13:21:04.685: INFO: Created: latency-svc-2s665 Mar 16 13:21:04.699: INFO: Got endpoints: latency-svc-2s665 
[784.858307ms] Mar 16 13:21:04.719: INFO: Created: latency-svc-t9j6f Mar 16 13:21:04.735: INFO: Got endpoints: latency-svc-t9j6f [754.754491ms] Mar 16 13:21:04.766: INFO: Created: latency-svc-pg8rw Mar 16 13:21:04.796: INFO: Got endpoints: latency-svc-pg8rw [785.503339ms] Mar 16 13:21:04.830: INFO: Created: latency-svc-sz2v4 Mar 16 13:21:04.843: INFO: Got endpoints: latency-svc-sz2v4 [785.101518ms] Mar 16 13:21:04.882: INFO: Created: latency-svc-jvx89 Mar 16 13:21:04.908: INFO: Created: latency-svc-b9rzc Mar 16 13:21:04.908: INFO: Got endpoints: latency-svc-jvx89 [808.410271ms] Mar 16 13:21:04.920: INFO: Got endpoints: latency-svc-b9rzc [772.484377ms] Mar 16 13:21:04.942: INFO: Created: latency-svc-5rk27 Mar 16 13:21:04.956: INFO: Got endpoints: latency-svc-5rk27 [717.842714ms] Mar 16 13:21:04.976: INFO: Created: latency-svc-x4dtg Mar 16 13:21:05.008: INFO: Got endpoints: latency-svc-x4dtg [758.138709ms] Mar 16 13:21:05.018: INFO: Created: latency-svc-29n2n Mar 16 13:21:05.029: INFO: Got endpoints: latency-svc-29n2n [713.075882ms] Mar 16 13:21:05.042: INFO: Created: latency-svc-92mht Mar 16 13:21:05.060: INFO: Got endpoints: latency-svc-92mht [690.302761ms] Mar 16 13:21:05.087: INFO: Created: latency-svc-5992s Mar 16 13:21:05.100: INFO: Got endpoints: latency-svc-5992s [670.243901ms] Mar 16 13:21:05.151: INFO: Created: latency-svc-r75gr Mar 16 13:21:05.170: INFO: Got endpoints: latency-svc-r75gr [650.954851ms] Mar 16 13:21:05.170: INFO: Created: latency-svc-dmvhv Mar 16 13:21:05.184: INFO: Got endpoints: latency-svc-dmvhv [628.510603ms] Mar 16 13:21:05.228: INFO: Created: latency-svc-tt7l8 Mar 16 13:21:05.244: INFO: Got endpoints: latency-svc-tt7l8 [607.725785ms] Mar 16 13:21:05.301: INFO: Created: latency-svc-jmr7z Mar 16 13:21:05.351: INFO: Created: latency-svc-xhr2x Mar 16 13:21:05.351: INFO: Got endpoints: latency-svc-jmr7z [682.191631ms] Mar 16 13:21:05.370: INFO: Got endpoints: latency-svc-xhr2x [671.093251ms] Mar 16 13:21:05.398: INFO: Created: 
latency-svc-xq9c9 Mar 16 13:21:05.444: INFO: Got endpoints: latency-svc-xq9c9 [708.598066ms] Mar 16 13:21:05.474: INFO: Created: latency-svc-tvcfj Mar 16 13:21:05.490: INFO: Got endpoints: latency-svc-tvcfj [693.611563ms] Mar 16 13:21:05.522: INFO: Created: latency-svc-hxlnr Mar 16 13:21:05.553: INFO: Got endpoints: latency-svc-hxlnr [709.617178ms] Mar 16 13:21:05.566: INFO: Created: latency-svc-md8n6 Mar 16 13:21:05.580: INFO: Got endpoints: latency-svc-md8n6 [671.76515ms] Mar 16 13:21:05.621: INFO: Created: latency-svc-hd2rl Mar 16 13:21:05.634: INFO: Got endpoints: latency-svc-hd2rl [713.746399ms] Mar 16 13:21:05.679: INFO: Created: latency-svc-5mgg7 Mar 16 13:21:05.683: INFO: Got endpoints: latency-svc-5mgg7 [726.990767ms] Mar 16 13:21:05.708: INFO: Created: latency-svc-5tcb5 Mar 16 13:21:05.734: INFO: Got endpoints: latency-svc-5tcb5 [725.745021ms] Mar 16 13:21:05.749: INFO: Created: latency-svc-k4cnq Mar 16 13:21:05.766: INFO: Got endpoints: latency-svc-k4cnq [736.746221ms] Mar 16 13:21:05.804: INFO: Created: latency-svc-4bjqz Mar 16 13:21:05.824: INFO: Got endpoints: latency-svc-4bjqz [764.282815ms] Mar 16 13:21:05.825: INFO: Created: latency-svc-52xx8 Mar 16 13:21:05.837: INFO: Got endpoints: latency-svc-52xx8 [736.914399ms] Mar 16 13:21:05.855: INFO: Created: latency-svc-qtwsx Mar 16 13:21:05.867: INFO: Got endpoints: latency-svc-qtwsx [696.722147ms] Mar 16 13:21:05.884: INFO: Created: latency-svc-fn88x Mar 16 13:21:05.897: INFO: Got endpoints: latency-svc-fn88x [713.298027ms] Mar 16 13:21:05.936: INFO: Created: latency-svc-p5b7b Mar 16 13:21:05.956: INFO: Created: latency-svc-6g7z5 Mar 16 13:21:05.956: INFO: Got endpoints: latency-svc-p5b7b [712.014057ms] Mar 16 13:21:05.969: INFO: Got endpoints: latency-svc-6g7z5 [617.849839ms] Mar 16 13:21:06.020: INFO: Created: latency-svc-4x4sp Mar 16 13:21:06.035: INFO: Got endpoints: latency-svc-4x4sp [665.171985ms] Mar 16 13:21:06.068: INFO: Created: latency-svc-d8477 Mar 16 13:21:06.082: INFO: Got endpoints: 
latency-svc-d8477 [638.297782ms] Mar 16 13:21:06.082: INFO: Created: latency-svc-2zb2b Mar 16 13:21:06.095: INFO: Got endpoints: latency-svc-2zb2b [605.064192ms] Mar 16 13:21:06.136: INFO: Created: latency-svc-5s7gv Mar 16 13:21:06.156: INFO: Got endpoints: latency-svc-5s7gv [602.996805ms] Mar 16 13:21:06.199: INFO: Created: latency-svc-xrssq Mar 16 13:21:06.217: INFO: Got endpoints: latency-svc-xrssq [637.441048ms] Mar 16 13:21:06.217: INFO: Created: latency-svc-8dkbk Mar 16 13:21:06.228: INFO: Got endpoints: latency-svc-8dkbk [593.449935ms] Mar 16 13:21:06.248: INFO: Created: latency-svc-49rbr Mar 16 13:21:06.257: INFO: Got endpoints: latency-svc-49rbr [573.197335ms] Mar 16 13:21:06.289: INFO: Created: latency-svc-7g72r Mar 16 13:21:06.337: INFO: Got endpoints: latency-svc-7g72r [603.546845ms] Mar 16 13:21:06.346: INFO: Created: latency-svc-vvj5f Mar 16 13:21:06.365: INFO: Got endpoints: latency-svc-vvj5f [598.854824ms] Mar 16 13:21:06.394: INFO: Created: latency-svc-nzdmk Mar 16 13:21:06.406: INFO: Got endpoints: latency-svc-nzdmk [581.982579ms] Mar 16 13:21:06.427: INFO: Created: latency-svc-6cpl9 Mar 16 13:21:06.487: INFO: Got endpoints: latency-svc-6cpl9 [649.435724ms] Mar 16 13:21:06.488: INFO: Created: latency-svc-b95dg Mar 16 13:21:06.496: INFO: Got endpoints: latency-svc-b95dg [628.746353ms] Mar 16 13:21:06.517: INFO: Created: latency-svc-tgmpp Mar 16 13:21:06.532: INFO: Got endpoints: latency-svc-tgmpp [635.084909ms] Mar 16 13:21:06.550: INFO: Created: latency-svc-l5z2z Mar 16 13:21:06.562: INFO: Got endpoints: latency-svc-l5z2z [605.854654ms] Mar 16 13:21:06.580: INFO: Created: latency-svc-gjsxm Mar 16 13:21:06.624: INFO: Got endpoints: latency-svc-gjsxm [655.461722ms] Mar 16 13:21:06.640: INFO: Created: latency-svc-2nj6k Mar 16 13:21:06.652: INFO: Got endpoints: latency-svc-2nj6k [616.991246ms] Mar 16 13:21:06.706: INFO: Created: latency-svc-l8ptb Mar 16 13:21:06.712: INFO: Got endpoints: latency-svc-l8ptb [629.962162ms] Mar 16 13:21:06.778: INFO: 
Created: latency-svc-tl9x2 Mar 16 13:21:06.790: INFO: Got endpoints: latency-svc-tl9x2 [694.796145ms] Mar 16 13:21:06.814: INFO: Created: latency-svc-kdvnn Mar 16 13:21:06.826: INFO: Got endpoints: latency-svc-kdvnn [670.098522ms] Mar 16 13:21:06.851: INFO: Created: latency-svc-jmrwx Mar 16 13:21:06.918: INFO: Got endpoints: latency-svc-jmrwx [700.561398ms] Mar 16 13:21:06.919: INFO: Created: latency-svc-6f6px Mar 16 13:21:06.927: INFO: Got endpoints: latency-svc-6f6px [699.784455ms] Mar 16 13:21:06.943: INFO: Created: latency-svc-bjwgv Mar 16 13:21:06.952: INFO: Got endpoints: latency-svc-bjwgv [695.012192ms] Mar 16 13:21:06.967: INFO: Created: latency-svc-94htt Mar 16 13:21:06.975: INFO: Got endpoints: latency-svc-94htt [637.966835ms] Mar 16 13:21:06.997: INFO: Created: latency-svc-hx7xg Mar 16 13:21:07.055: INFO: Got endpoints: latency-svc-hx7xg [690.858089ms] Mar 16 13:21:07.072: INFO: Created: latency-svc-znkkv Mar 16 13:21:07.084: INFO: Got endpoints: latency-svc-znkkv [677.095085ms] Mar 16 13:21:07.102: INFO: Created: latency-svc-w5fgs Mar 16 13:21:07.114: INFO: Got endpoints: latency-svc-w5fgs [626.867307ms] Mar 16 13:21:07.132: INFO: Created: latency-svc-zxnz7 Mar 16 13:21:07.144: INFO: Got endpoints: latency-svc-zxnz7 [647.566209ms] Mar 16 13:21:07.181: INFO: Created: latency-svc-4cqbk Mar 16 13:21:07.186: INFO: Got endpoints: latency-svc-4cqbk [653.012605ms] Mar 16 13:21:07.207: INFO: Created: latency-svc-82jxv Mar 16 13:21:07.222: INFO: Got endpoints: latency-svc-82jxv [659.561218ms] Mar 16 13:21:07.243: INFO: Created: latency-svc-w68wz Mar 16 13:21:07.258: INFO: Got endpoints: latency-svc-w68wz [633.263776ms] Mar 16 13:21:07.337: INFO: Created: latency-svc-lmsxf Mar 16 13:21:07.370: INFO: Created: latency-svc-q6lw4 Mar 16 13:21:07.370: INFO: Got endpoints: latency-svc-lmsxf [717.641696ms] Mar 16 13:21:07.383: INFO: Got endpoints: latency-svc-q6lw4 [671.08584ms] Mar 16 13:21:07.399: INFO: Created: latency-svc-4fqt5 Mar 16 13:21:07.413: INFO: Got 
endpoints: latency-svc-4fqt5 [623.467371ms] Mar 16 13:21:07.429: INFO: Created: latency-svc-6562m Mar 16 13:21:07.462: INFO: Got endpoints: latency-svc-6562m [636.409653ms] Mar 16 13:21:07.492: INFO: Created: latency-svc-x6hcc Mar 16 13:21:07.509: INFO: Got endpoints: latency-svc-x6hcc [591.268103ms] Mar 16 13:21:07.527: INFO: Created: latency-svc-7z9h6 Mar 16 13:21:07.545: INFO: Got endpoints: latency-svc-7z9h6 [617.365069ms] Mar 16 13:21:07.607: INFO: Created: latency-svc-hcpwd Mar 16 13:21:07.611: INFO: Got endpoints: latency-svc-hcpwd [658.847393ms] Mar 16 13:21:07.645: INFO: Created: latency-svc-92whc Mar 16 13:21:07.675: INFO: Got endpoints: latency-svc-92whc [699.699482ms] Mar 16 13:21:07.699: INFO: Created: latency-svc-k2jqz Mar 16 13:21:07.750: INFO: Got endpoints: latency-svc-k2jqz [695.022995ms] Mar 16 13:21:07.752: INFO: Created: latency-svc-8s55l Mar 16 13:21:07.761: INFO: Got endpoints: latency-svc-8s55l [676.999203ms] Mar 16 13:21:07.780: INFO: Created: latency-svc-ljq9f Mar 16 13:21:07.791: INFO: Got endpoints: latency-svc-ljq9f [676.608834ms] Mar 16 13:21:07.804: INFO: Created: latency-svc-cw8ph Mar 16 13:21:07.815: INFO: Got endpoints: latency-svc-cw8ph [671.025813ms] Mar 16 13:21:07.831: INFO: Created: latency-svc-x8fnp Mar 16 13:21:07.845: INFO: Got endpoints: latency-svc-x8fnp [658.956333ms] Mar 16 13:21:07.888: INFO: Created: latency-svc-j66p8 Mar 16 13:21:07.910: INFO: Created: latency-svc-xr9fk Mar 16 13:21:07.910: INFO: Got endpoints: latency-svc-j66p8 [687.913992ms] Mar 16 13:21:07.933: INFO: Got endpoints: latency-svc-xr9fk [675.500673ms] Mar 16 13:21:07.960: INFO: Created: latency-svc-vgh6b Mar 16 13:21:07.979: INFO: Got endpoints: latency-svc-vgh6b [608.72533ms] Mar 16 13:21:08.014: INFO: Created: latency-svc-2bhk9 Mar 16 13:21:08.031: INFO: Got endpoints: latency-svc-2bhk9 [648.210027ms] Mar 16 13:21:08.032: INFO: Created: latency-svc-4mggc Mar 16 13:21:08.048: INFO: Got endpoints: latency-svc-4mggc [634.80285ms] Mar 16 13:21:08.067: 
INFO: Created: latency-svc-wtfwz Mar 16 13:21:08.084: INFO: Got endpoints: latency-svc-wtfwz [622.017619ms] Mar 16 13:21:08.102: INFO: Created: latency-svc-dxttq Mar 16 13:21:08.157: INFO: Got endpoints: latency-svc-dxttq [648.070212ms] Mar 16 13:21:08.159: INFO: Created: latency-svc-kmhbn Mar 16 13:21:08.162: INFO: Got endpoints: latency-svc-kmhbn [617.110839ms] Mar 16 13:21:08.185: INFO: Created: latency-svc-4bd5w Mar 16 13:21:08.223: INFO: Got endpoints: latency-svc-4bd5w [612.645261ms] Mar 16 13:21:08.254: INFO: Created: latency-svc-4x26v Mar 16 13:21:08.307: INFO: Got endpoints: latency-svc-4x26v [632.268726ms] Mar 16 13:21:08.310: INFO: Created: latency-svc-h26m2 Mar 16 13:21:08.328: INFO: Got endpoints: latency-svc-h26m2 [577.931232ms] Mar 16 13:21:08.329: INFO: Created: latency-svc-tl972 Mar 16 13:21:08.352: INFO: Got endpoints: latency-svc-tl972 [591.885892ms] Mar 16 13:21:08.377: INFO: Created: latency-svc-62jlv Mar 16 13:21:08.390: INFO: Got endpoints: latency-svc-62jlv [599.609357ms] Mar 16 13:21:08.463: INFO: Created: latency-svc-mrl6l Mar 16 13:21:08.492: INFO: Got endpoints: latency-svc-mrl6l [677.752921ms] Mar 16 13:21:08.515: INFO: Created: latency-svc-xskq9 Mar 16 13:21:08.528: INFO: Got endpoints: latency-svc-xskq9 [683.306025ms] Mar 16 13:21:08.544: INFO: Created: latency-svc-49zfn Mar 16 13:21:08.576: INFO: Got endpoints: latency-svc-49zfn [666.436836ms] Mar 16 13:21:08.581: INFO: Created: latency-svc-l2rhj Mar 16 13:21:08.593: INFO: Got endpoints: latency-svc-l2rhj [659.994569ms] Mar 16 13:21:08.610: INFO: Created: latency-svc-8sxjt Mar 16 13:21:08.623: INFO: Got endpoints: latency-svc-8sxjt [644.310503ms] Mar 16 13:21:08.643: INFO: Created: latency-svc-tw4qn Mar 16 13:21:08.659: INFO: Got endpoints: latency-svc-tw4qn [627.893085ms] Mar 16 13:21:08.720: INFO: Created: latency-svc-7hs46 Mar 16 13:21:08.734: INFO: Got endpoints: latency-svc-7hs46 [685.354339ms] Mar 16 13:21:08.770: INFO: Created: latency-svc-vvz7r Mar 16 13:21:08.785: INFO: Got 
endpoints: latency-svc-vvz7r [700.831665ms] Mar 16 13:21:08.809: INFO: Created: latency-svc-qvxdz Mar 16 13:21:08.864: INFO: Got endpoints: latency-svc-qvxdz [706.520051ms] Mar 16 13:21:08.866: INFO: Created: latency-svc-6xpvx Mar 16 13:21:08.875: INFO: Got endpoints: latency-svc-6xpvx [712.804983ms] Mar 16 13:21:08.893: INFO: Created: latency-svc-tdrg4 Mar 16 13:21:08.905: INFO: Got endpoints: latency-svc-tdrg4 [682.027437ms] Mar 16 13:21:08.925: INFO: Created: latency-svc-s8g72 Mar 16 13:21:08.941: INFO: Got endpoints: latency-svc-s8g72 [633.64916ms] Mar 16 13:21:08.955: INFO: Created: latency-svc-zjbbb Mar 16 13:21:08.978: INFO: Got endpoints: latency-svc-zjbbb [649.094287ms] Mar 16 13:21:08.991: INFO: Created: latency-svc-rm5ws Mar 16 13:21:09.013: INFO: Got endpoints: latency-svc-rm5ws [660.605583ms] Mar 16 13:21:09.037: INFO: Created: latency-svc-fmg2k Mar 16 13:21:09.302: INFO: Got endpoints: latency-svc-fmg2k [911.688643ms] Mar 16 13:21:09.315: INFO: Created: latency-svc-plzrf Mar 16 13:21:09.331: INFO: Got endpoints: latency-svc-plzrf [838.182011ms] Mar 16 13:21:09.369: INFO: Created: latency-svc-8qwdz Mar 16 13:21:09.391: INFO: Got endpoints: latency-svc-8qwdz [862.608281ms] Mar 16 13:21:09.539: INFO: Created: latency-svc-n9qd6 Mar 16 13:21:09.564: INFO: Created: latency-svc-l89gd Mar 16 13:21:09.564: INFO: Got endpoints: latency-svc-n9qd6 [987.732051ms] Mar 16 13:21:09.576: INFO: Got endpoints: latency-svc-l89gd [982.874549ms] Mar 16 13:21:09.594: INFO: Created: latency-svc-l76r9 Mar 16 13:21:09.605: INFO: Got endpoints: latency-svc-l76r9 [982.3061ms] Mar 16 13:21:09.673: INFO: Created: latency-svc-tg5d8 Mar 16 13:21:09.705: INFO: Created: latency-svc-2v7vd Mar 16 13:21:09.705: INFO: Got endpoints: latency-svc-tg5d8 [1.045515546s] Mar 16 13:21:09.720: INFO: Got endpoints: latency-svc-2v7vd [986.197742ms] Mar 16 13:21:09.741: INFO: Created: latency-svc-z2987 Mar 16 13:21:09.756: INFO: Got endpoints: latency-svc-z2987 [970.222451ms] Mar 16 13:21:09.799: 
INFO: Created: latency-svc-j64zt Mar 16 13:21:09.816: INFO: Got endpoints: latency-svc-j64zt [952.109461ms] Mar 16 13:21:09.846: INFO: Created: latency-svc-zwrwm Mar 16 13:21:09.864: INFO: Got endpoints: latency-svc-zwrwm [988.897699ms] Mar 16 13:21:09.885: INFO: Created: latency-svc-8tkr4 Mar 16 13:21:09.912: INFO: Got endpoints: latency-svc-8tkr4 [1.006414179s] Mar 16 13:21:09.927: INFO: Created: latency-svc-lpp8n Mar 16 13:21:09.945: INFO: Got endpoints: latency-svc-lpp8n [1.003729635s] Mar 16 13:21:09.963: INFO: Created: latency-svc-qh4hn Mar 16 13:21:09.971: INFO: Got endpoints: latency-svc-qh4hn [993.624655ms] Mar 16 13:21:09.990: INFO: Created: latency-svc-whtc8 Mar 16 13:21:10.002: INFO: Got endpoints: latency-svc-whtc8 [988.481584ms] Mar 16 13:21:10.050: INFO: Created: latency-svc-vvwmh Mar 16 13:21:10.068: INFO: Got endpoints: latency-svc-vvwmh [765.729713ms] Mar 16 13:21:10.068: INFO: Created: latency-svc-nmkbm Mar 16 13:21:10.079: INFO: Got endpoints: latency-svc-nmkbm [748.152141ms] Mar 16 13:21:10.091: INFO: Created: latency-svc-6xrbf Mar 16 13:21:10.103: INFO: Got endpoints: latency-svc-6xrbf [712.384684ms] Mar 16 13:21:10.124: INFO: Created: latency-svc-zsz2n Mar 16 13:21:10.139: INFO: Got endpoints: latency-svc-zsz2n [574.9476ms] Mar 16 13:21:10.182: INFO: Created: latency-svc-hzx4m Mar 16 13:21:10.187: INFO: Got endpoints: latency-svc-hzx4m [611.237886ms] Mar 16 13:21:10.220: INFO: Created: latency-svc-t9njm Mar 16 13:21:10.235: INFO: Got endpoints: latency-svc-t9njm [629.893672ms] Mar 16 13:21:10.254: INFO: Created: latency-svc-pg8dw Mar 16 13:21:10.271: INFO: Got endpoints: latency-svc-pg8dw [566.490027ms] Mar 16 13:21:10.307: INFO: Created: latency-svc-fs5rl Mar 16 13:21:10.326: INFO: Got endpoints: latency-svc-fs5rl [605.987652ms] Mar 16 13:21:10.327: INFO: Created: latency-svc-2nzgm Mar 16 13:21:10.343: INFO: Got endpoints: latency-svc-2nzgm [587.38634ms] Mar 16 13:21:10.371: INFO: Created: latency-svc-c5n2j Mar 16 13:21:10.391: INFO: Got 
endpoints: latency-svc-c5n2j [574.398114ms] Mar 16 13:21:10.451: INFO: Created: latency-svc-s2mn9 Mar 16 13:21:10.475: INFO: Created: latency-svc-wsphm Mar 16 13:21:10.475: INFO: Got endpoints: latency-svc-s2mn9 [611.554978ms] Mar 16 13:21:10.493: INFO: Got endpoints: latency-svc-wsphm [580.805731ms] Mar 16 13:21:10.512: INFO: Created: latency-svc-4bn7b Mar 16 13:21:10.588: INFO: Got endpoints: latency-svc-4bn7b [643.494553ms] Mar 16 13:21:10.591: INFO: Created: latency-svc-cnx8r Mar 16 13:21:10.607: INFO: Got endpoints: latency-svc-cnx8r [635.376798ms] Mar 16 13:21:10.628: INFO: Created: latency-svc-jf8k6 Mar 16 13:21:10.643: INFO: Got endpoints: latency-svc-jf8k6 [640.903062ms] Mar 16 13:21:10.665: INFO: Created: latency-svc-8hpgr Mar 16 13:21:10.681: INFO: Got endpoints: latency-svc-8hpgr [613.461145ms] Mar 16 13:21:10.720: INFO: Created: latency-svc-dk6kd Mar 16 13:21:10.745: INFO: Got endpoints: latency-svc-dk6kd [666.502264ms] Mar 16 13:21:10.746: INFO: Created: latency-svc-bwxfb Mar 16 13:21:10.782: INFO: Got endpoints: latency-svc-bwxfb [678.691674ms] Mar 16 13:21:10.805: INFO: Created: latency-svc-gzffq Mar 16 13:21:10.816: INFO: Got endpoints: latency-svc-gzffq [677.166179ms] Mar 16 13:21:10.842: INFO: Created: latency-svc-7ljxk Mar 16 13:21:10.852: INFO: Got endpoints: latency-svc-7ljxk [664.778709ms] Mar 16 13:21:10.874: INFO: Created: latency-svc-bwprl Mar 16 13:21:10.905: INFO: Got endpoints: latency-svc-bwprl [669.121597ms] Mar 16 13:21:10.934: INFO: Created: latency-svc-f66dm Mar 16 13:21:10.960: INFO: Got endpoints: latency-svc-f66dm [688.430559ms] Mar 16 13:21:10.973: INFO: Created: latency-svc-qqgwg Mar 16 13:21:10.990: INFO: Got endpoints: latency-svc-qqgwg [664.110793ms] Mar 16 13:21:11.009: INFO: Created: latency-svc-8j5tx Mar 16 13:21:11.040: INFO: Got endpoints: latency-svc-8j5tx [696.456218ms] Mar 16 13:21:11.086: INFO: Created: latency-svc-thrzl Mar 16 13:21:11.102: INFO: Created: latency-svc-8w2lr Mar 16 13:21:11.102: INFO: Got endpoints: 
latency-svc-thrzl [711.510068ms] Mar 16 13:21:11.116: INFO: Got endpoints: latency-svc-8w2lr [640.267045ms] Mar 16 13:21:11.132: INFO: Created: latency-svc-lmsg7 Mar 16 13:21:11.146: INFO: Got endpoints: latency-svc-lmsg7 [653.486279ms] Mar 16 13:21:11.162: INFO: Created: latency-svc-64247 Mar 16 13:21:11.176: INFO: Got endpoints: latency-svc-64247 [587.49442ms] Mar 16 13:21:11.217: INFO: Created: latency-svc-9g74t Mar 16 13:21:11.238: INFO: Created: latency-svc-n9szn Mar 16 13:21:11.238: INFO: Got endpoints: latency-svc-9g74t [631.454275ms] Mar 16 13:21:11.250: INFO: Got endpoints: latency-svc-n9szn [606.984634ms] Mar 16 13:21:11.261: INFO: Created: latency-svc-db5mf Mar 16 13:21:11.278: INFO: Got endpoints: latency-svc-db5mf [596.313805ms] Mar 16 13:21:11.291: INFO: Created: latency-svc-hbbr7 Mar 16 13:21:11.343: INFO: Got endpoints: latency-svc-hbbr7 [597.541168ms] Mar 16 13:21:11.366: INFO: Created: latency-svc-92lkd Mar 16 13:21:11.386: INFO: Got endpoints: latency-svc-92lkd [604.166091ms] Mar 16 13:21:11.402: INFO: Created: latency-svc-x7cbl Mar 16 13:21:11.415: INFO: Got endpoints: latency-svc-x7cbl [599.048786ms] Mar 16 13:21:11.432: INFO: Created: latency-svc-xsfxf Mar 16 13:21:11.463: INFO: Got endpoints: latency-svc-xsfxf [610.387772ms] Mar 16 13:21:11.471: INFO: Created: latency-svc-rs6s9 Mar 16 13:21:11.487: INFO: Got endpoints: latency-svc-rs6s9 [582.521611ms] Mar 16 13:21:11.514: INFO: Created: latency-svc-868rp Mar 16 13:21:11.543: INFO: Got endpoints: latency-svc-868rp [583.078765ms] Mar 16 13:21:11.588: INFO: Created: latency-svc-d6wf6 Mar 16 13:21:11.595: INFO: Got endpoints: latency-svc-d6wf6 [604.932526ms] Mar 16 13:21:11.636: INFO: Created: latency-svc-5zmgx Mar 16 13:21:11.649: INFO: Got endpoints: latency-svc-5zmgx [609.524592ms] Mar 16 13:21:11.679: INFO: Created: latency-svc-s2pj8 Mar 16 13:21:11.732: INFO: Got endpoints: latency-svc-s2pj8 [630.207417ms] Mar 16 13:21:11.747: INFO: Created: latency-svc-94mqt Mar 16 13:21:11.763: INFO: Got 
endpoints: latency-svc-94mqt [647.215444ms] Mar 16 13:21:11.795: INFO: Created: latency-svc-pbm7b Mar 16 13:21:11.811: INFO: Got endpoints: latency-svc-pbm7b [664.628862ms] Mar 16 13:21:11.828: INFO: Created: latency-svc-mldmp Mar 16 13:21:11.894: INFO: Got endpoints: latency-svc-mldmp [718.458759ms] Mar 16 13:21:11.896: INFO: Created: latency-svc-c6kvc Mar 16 13:21:11.900: INFO: Got endpoints: latency-svc-c6kvc [662.167964ms] Mar 16 13:21:11.918: INFO: Created: latency-svc-w9dsw Mar 16 13:21:11.931: INFO: Got endpoints: latency-svc-w9dsw [681.18449ms] Mar 16 13:21:11.951: INFO: Created: latency-svc-4ckc7 Mar 16 13:21:11.967: INFO: Got endpoints: latency-svc-4ckc7 [689.261474ms] Mar 16 13:21:11.984: INFO: Created: latency-svc-p4xxz Mar 16 13:21:11.991: INFO: Got endpoints: latency-svc-p4xxz [647.742056ms] Mar 16 13:21:12.032: INFO: Created: latency-svc-ttpr6 Mar 16 13:21:12.038: INFO: Got endpoints: latency-svc-ttpr6 [652.271867ms] Mar 16 13:21:12.054: INFO: Created: latency-svc-449sp Mar 16 13:21:12.063: INFO: Got endpoints: latency-svc-449sp [647.107088ms] Mar 16 13:21:12.080: INFO: Created: latency-svc-pk9sc Mar 16 13:21:12.092: INFO: Got endpoints: latency-svc-pk9sc [629.661966ms] Mar 16 13:21:12.110: INFO: Created: latency-svc-gd7hr Mar 16 13:21:12.122: INFO: Got endpoints: latency-svc-gd7hr [635.031467ms] Mar 16 13:21:12.152: INFO: Created: latency-svc-fnbt6 Mar 16 13:21:12.173: INFO: Created: latency-svc-5jt9s Mar 16 13:21:12.174: INFO: Got endpoints: latency-svc-fnbt6 [631.049203ms] Mar 16 13:21:12.209: INFO: Got endpoints: latency-svc-5jt9s [614.297335ms] Mar 16 13:21:12.245: INFO: Created: latency-svc-kkxg7 Mar 16 13:21:12.271: INFO: Got endpoints: latency-svc-kkxg7 [622.010932ms] Mar 16 13:21:12.284: INFO: Created: latency-svc-7j9c8 Mar 16 13:21:12.302: INFO: Got endpoints: latency-svc-7j9c8 [570.040663ms] Mar 16 13:21:12.326: INFO: Created: latency-svc-t824g Mar 16 13:21:12.350: INFO: Got endpoints: latency-svc-t824g [586.876503ms] Mar 16 13:21:12.403: 
INFO: Created: latency-svc-htw8t Mar 16 13:21:12.427: INFO: Created: latency-svc-478hl Mar 16 13:21:12.427: INFO: Got endpoints: latency-svc-htw8t [615.579824ms] Mar 16 13:21:12.440: INFO: Got endpoints: latency-svc-478hl [545.405194ms] Mar 16 13:21:12.491: INFO: Created: latency-svc-dt448 Mar 16 13:21:12.553: INFO: Got endpoints: latency-svc-dt448 [652.331412ms] Mar 16 13:21:12.553: INFO: Latencies: [62.762748ms 82.078667ms 124.644517ms 154.778739ms 208.055272ms 265.12861ms 373.513629ms 388.217839ms 466.54208ms 545.405194ms 566.490027ms 570.040663ms 573.197335ms 574.398114ms 574.9476ms 577.931232ms 580.805731ms 581.982579ms 582.521611ms 583.078765ms 586.876503ms 587.38634ms 587.49442ms 591.268103ms 591.885892ms 593.449935ms 596.073938ms 596.313805ms 597.541168ms 598.854824ms 599.048786ms 599.609357ms 602.996805ms 603.546845ms 604.166091ms 604.932526ms 605.064192ms 605.854654ms 605.987652ms 606.984634ms 607.725785ms 608.72533ms 609.524592ms 610.387772ms 611.237886ms 611.554978ms 612.645261ms 613.461145ms 614.297335ms 615.579824ms 616.991246ms 617.110839ms 617.365069ms 617.849839ms 622.010932ms 622.017619ms 623.467371ms 626.867307ms 627.893085ms 628.510603ms 628.746353ms 629.661966ms 629.893672ms 629.962162ms 630.207417ms 631.049203ms 631.454275ms 632.268726ms 633.263776ms 633.64916ms 634.80285ms 635.031467ms 635.084909ms 635.376798ms 636.409653ms 637.441048ms 637.966835ms 638.297782ms 640.267045ms 640.903062ms 643.494553ms 644.310503ms 647.107088ms 647.215444ms 647.566209ms 647.742056ms 648.070212ms 648.210027ms 649.094287ms 649.435724ms 650.954851ms 652.199081ms 652.271867ms 652.331412ms 653.012605ms 653.486279ms 655.461722ms 658.847393ms 658.956333ms 659.561218ms 659.994569ms 660.605583ms 662.167964ms 664.110793ms 664.628862ms 664.778709ms 665.171985ms 666.436836ms 666.502264ms 669.121597ms 670.098522ms 670.243901ms 671.025813ms 671.08584ms 671.093251ms 671.76515ms 675.500673ms 676.608834ms 676.999203ms 677.095085ms 677.166179ms 677.752921ms 678.691674ms 
681.18449ms 681.611124ms 682.027437ms 682.191631ms 683.306025ms 685.354339ms 687.913992ms 688.430559ms 689.261474ms 690.302761ms 690.858089ms 693.611563ms 694.796145ms 695.012192ms 695.022995ms 696.456218ms 696.722147ms 699.699482ms 699.784455ms 700.561398ms 700.831665ms 706.520051ms 708.598066ms 709.617178ms 711.510068ms 712.014057ms 712.384684ms 712.804983ms 713.075882ms 713.298027ms 713.746399ms 717.641696ms 717.842714ms 718.458759ms 725.745021ms 726.990767ms 736.746221ms 736.914399ms 747.343174ms 748.152141ms 754.754491ms 758.138709ms 764.282815ms 765.729713ms 772.484377ms 777.932585ms 783.976153ms 784.858307ms 785.101518ms 785.503339ms 804.409292ms 807.768617ms 808.410271ms 824.119984ms 825.214603ms 832.925803ms 838.182011ms 856.272243ms 862.516392ms 862.608281ms 871.787968ms 875.126148ms 880.985411ms 898.379034ms 911.688643ms 952.109461ms 970.222451ms 982.3061ms 982.874549ms 986.197742ms 987.732051ms 988.481584ms 988.897699ms 993.624655ms 1.003729635s 1.006414179s 1.045515546s] Mar 16 13:21:12.553: INFO: 50 %ile: 659.994569ms Mar 16 13:21:12.553: INFO: 90 %ile: 856.272243ms Mar 16 13:21:12.553: INFO: 99 %ile: 1.006414179s Mar 16 13:21:12.553: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:21:12.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8965" for this suite. 
• [SLOW TEST:13.616 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":60,"skipped":980,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:21:12.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-95021269-cadd-4fe8-875c-4bb201cd7b85
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-95021269-cadd-4fe8-875c-4bb201cd7b85
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:22:37.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1137" for this suite.
• [SLOW TEST:85.257 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":1001,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:22:37.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 16 13:22:37.957: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-c788279a-6ce7-4ab0-94aa-b206743732ec" in namespace "security-context-test-4033" to be "Succeeded or Failed"
Mar 16 13:22:37.961: INFO: Pod "busybox-privileged-false-c788279a-6ce7-4ab0-94aa-b206743732ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075404ms
Mar 16 13:22:39.965: INFO: Pod "busybox-privileged-false-c788279a-6ce7-4ab0-94aa-b206743732ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008027962s
Mar 16 13:22:41.968: INFO: Pod "busybox-privileged-false-c788279a-6ce7-4ab0-94aa-b206743732ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011442242s
Mar 16 13:22:41.968: INFO: Pod "busybox-privileged-false-c788279a-6ce7-4ab0-94aa-b206743732ec" satisfied condition "Succeeded or Failed"
Mar 16 13:22:41.989: INFO: Got logs for pod "busybox-privileged-false-c788279a-6ce7-4ab0-94aa-b206743732ec": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:22:41.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4033" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":62,"skipped":1014,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:22:41.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-193e940a-de19-4f2e-b602-1ecf0260e236
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-193e940a-de19-4f2e-b602-1ecf0260e236
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:22:50.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7590" for this suite.
• [SLOW TEST:8.156 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":1020,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:22:50.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 16 13:22:50.263: INFO: Creating ReplicaSet my-hostname-basic-0023baf5-2247-4ed4-a778-a261a70473b1
Mar 16 13:22:50.284: INFO: Pod name my-hostname-basic-0023baf5-2247-4ed4-a778-a261a70473b1: Found 0 pods out of 1
Mar 16 13:22:55.287: INFO: Pod name my-hostname-basic-0023baf5-2247-4ed4-a778-a261a70473b1: Found 1 pods out of 1
Mar 16 13:22:55.287: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-0023baf5-2247-4ed4-a778-a261a70473b1" is running
Mar 16 13:22:55.290: INFO: Pod "my-hostname-basic-0023baf5-2247-4ed4-a778-a261a70473b1-7fzvk" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 13:22:50 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 13:22:52 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 13:22:52 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 13:22:50 +0000 UTC Reason: Message:}])
Mar 16 13:22:55.290: INFO: Trying to dial the pod
Mar 16 13:23:00.301: INFO: Controller my-hostname-basic-0023baf5-2247-4ed4-a778-a261a70473b1: Got expected result from replica 1 [my-hostname-basic-0023baf5-2247-4ed4-a778-a261a70473b1-7fzvk]: "my-hostname-basic-0023baf5-2247-4ed4-a778-a261a70473b1-7fzvk", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:23:00.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6146" for this suite.
• [SLOW TEST:10.156 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":64,"skipped":1034,"failed":0}
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:23:00.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 16 13:23:00.451: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Mar 16 13:23:00.460: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:00.479: INFO: Number of nodes with available pods: 0 Mar 16 13:23:00.479: INFO: Node latest-worker is running more than one daemon pod Mar 16 13:23:01.485: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:01.488: INFO: Number of nodes with available pods: 0 Mar 16 13:23:01.488: INFO: Node latest-worker is running more than one daemon pod Mar 16 13:23:02.502: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:02.505: INFO: Number of nodes with available pods: 0 Mar 16 13:23:02.505: INFO: Node latest-worker is running more than one daemon pod Mar 16 13:23:03.483: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:03.487: INFO: Number of nodes with available pods: 0 Mar 16 13:23:03.487: INFO: Node latest-worker is running more than one daemon pod Mar 16 13:23:04.483: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:04.487: INFO: Number of nodes with available pods: 2 Mar 16 13:23:04.487: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 16 13:23:04.543: INFO: Wrong image for pod: daemon-set-94dw5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 16 13:23:04.543: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 13:23:04.559: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:05.563: INFO: Wrong image for pod: daemon-set-94dw5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 13:23:05.563: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 13:23:05.566: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:06.564: INFO: Wrong image for pod: daemon-set-94dw5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 13:23:06.564: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 13:23:06.568: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:07.564: INFO: Wrong image for pod: daemon-set-94dw5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 13:23:07.564: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 16 13:23:07.568: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:08.563: INFO: Wrong image for pod: daemon-set-94dw5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 13:23:08.563: INFO: Pod daemon-set-94dw5 is not available Mar 16 13:23:08.563: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 13:23:08.567: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:09.563: INFO: Wrong image for pod: daemon-set-94dw5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 13:23:09.563: INFO: Pod daemon-set-94dw5 is not available Mar 16 13:23:09.563: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 13:23:09.568: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:10.562: INFO: Wrong image for pod: daemon-set-94dw5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 13:23:10.562: INFO: Pod daemon-set-94dw5 is not available Mar 16 13:23:10.562: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 16 13:23:10.566: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:11.563: INFO: Wrong image for pod: daemon-set-94dw5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 13:23:11.564: INFO: Pod daemon-set-94dw5 is not available Mar 16 13:23:11.564: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 13:23:11.568: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:12.563: INFO: Wrong image for pod: daemon-set-94dw5. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 13:23:12.563: INFO: Pod daemon-set-94dw5 is not available Mar 16 13:23:12.563: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Mar 16 13:23:12.568: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 13:23:13.564: INFO: Pod daemon-set-6q9hx is not available Mar 16 13:23:13.564: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 16 13:23:13.568: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:23:14.563: INFO: Pod daemon-set-6q9hx is not available
Mar 16 13:23:14.563: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 16 13:23:14.567: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:23:15.563: INFO: Pod daemon-set-6q9hx is not available
Mar 16 13:23:15.563: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 16 13:23:15.567: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:23:16.563: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 16 13:23:16.567: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:23:17.563: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 16 13:23:17.563: INFO: Pod daemon-set-fn6sh is not available
Mar 16 13:23:17.566: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:23:18.563: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 16 13:23:18.563: INFO: Pod daemon-set-fn6sh is not available
Mar 16 13:23:18.567: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:23:19.564: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 16 13:23:19.564: INFO: Pod daemon-set-fn6sh is not available
Mar 16 13:23:19.568: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:23:20.563: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 16 13:23:20.563: INFO: Pod daemon-set-fn6sh is not available
Mar 16 13:23:20.566: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:23:21.563: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 16 13:23:21.563: INFO: Pod daemon-set-fn6sh is not available
Mar 16 13:23:21.566: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:23:22.563: INFO: Wrong image for pod: daemon-set-fn6sh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Mar 16 13:23:22.563: INFO: Pod daemon-set-fn6sh is not available
Mar 16 13:23:22.566: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:23:23.563: INFO: Pod daemon-set-xrdbw is not available
Mar 16 13:23:23.568: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Mar 16 13:23:23.572: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:23:23.575: INFO: Number of nodes with available pods: 1
Mar 16 13:23:23.575: INFO: Node latest-worker is running more than one daemon pod
Mar 16 13:23:24.579: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:23:24.583: INFO: Number of nodes with available pods: 1
Mar 16 13:23:24.583: INFO: Node latest-worker is running more than one daemon pod
Mar 16 13:23:25.724: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:23:25.771: INFO: Number of nodes with available pods: 1
Mar 16 13:23:25.771: INFO: Node latest-worker is running more than one daemon pod
Mar 16 13:23:26.580: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:23:26.584: INFO: Number of nodes with available pods: 1
Mar 16 13:23:26.584: INFO: Node latest-worker is running more than one daemon pod
Mar 16 13:23:27.585: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:23:27.588: INFO: Number of nodes with available pods: 1
Mar 16 13:23:27.588: INFO: Node latest-worker is running more than one daemon pod
Mar 16 13:23:28.580: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:23:28.583: INFO: Number of nodes with available pods: 2
Mar 16 13:23:28.583: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5314, will wait for the garbage collector to delete the pods
Mar 16 13:23:28.658: INFO: Deleting DaemonSet.extensions daemon-set took: 6.477748ms
Mar 16 13:23:28.958: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.236679ms
Mar 16 13:23:43.061: INFO: Number of nodes with available pods: 0
Mar 16 13:23:43.061: INFO: Number of running nodes: 0, number of available pods: 0
Mar 16 13:23:43.063: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5314/daemonsets","resourceVersion":"273004"},"items":null}
Mar 16 13:23:43.066: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5314/pods","resourceVersion":"273004"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:23:43.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5314" for this suite.
• [SLOW TEST:42.773 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":65,"skipped":1034,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:23:43.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:23:43.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9735" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":66,"skipped":1060,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:23:43.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:23:59.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2322" for this suite.
• [SLOW TEST:16.333 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":67,"skipped":1068,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:23:59.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-35bd00f7-d835-4bed-beb1-eddcb798b727
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:23:59.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8546" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":68,"skipped":1101,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:23:59.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-9cfc21c1-fe6e-4eda-b17b-a09c37ba37ff
STEP: Creating a pod to test consume secrets
Mar 16 13:23:59.638: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-228d80da-bf09-48a6-bc14-aa7b714ace37" in namespace "projected-1861" to be "Succeeded or Failed"
Mar 16 13:23:59.640: INFO: Pod "pod-projected-secrets-228d80da-bf09-48a6-bc14-aa7b714ace37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187433ms
Mar 16 13:24:01.644: INFO: Pod "pod-projected-secrets-228d80da-bf09-48a6-bc14-aa7b714ace37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005811434s
Mar 16 13:24:03.647: INFO: Pod "pod-projected-secrets-228d80da-bf09-48a6-bc14-aa7b714ace37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009759906s
STEP: Saw pod success
Mar 16 13:24:03.648: INFO: Pod "pod-projected-secrets-228d80da-bf09-48a6-bc14-aa7b714ace37" satisfied condition "Succeeded or Failed"
Mar 16 13:24:03.651: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-228d80da-bf09-48a6-bc14-aa7b714ace37 container projected-secret-volume-test:
STEP: delete the pod
Mar 16 13:24:03.671: INFO: Waiting for pod pod-projected-secrets-228d80da-bf09-48a6-bc14-aa7b714ace37 to disappear
Mar 16 13:24:03.689: INFO: Pod pod-projected-secrets-228d80da-bf09-48a6-bc14-aa7b714ace37 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:24:03.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1861" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1127,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:24:03.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:24:14.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2693" for this suite.
• [SLOW TEST:11.103 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":70,"skipped":1138,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:24:14.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 16 13:24:14.877: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:24:15.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3207" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":71,"skipped":1144,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:24:15.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Mar 16 13:24:15.966: INFO: Waiting up to 5m0s for pod "pod-943c7bf4-6055-4175-b81b-71549b7cc75d" in namespace "emptydir-2287" to be "Succeeded or Failed"
Mar 16 13:24:15.981: INFO: Pod "pod-943c7bf4-6055-4175-b81b-71549b7cc75d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.88824ms
Mar 16 13:24:17.984: INFO: Pod "pod-943c7bf4-6055-4175-b81b-71549b7cc75d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018326803s
Mar 16 13:24:19.993: INFO: Pod "pod-943c7bf4-6055-4175-b81b-71549b7cc75d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026906342s
STEP: Saw pod success
Mar 16 13:24:19.993: INFO: Pod "pod-943c7bf4-6055-4175-b81b-71549b7cc75d" satisfied condition "Succeeded or Failed"
Mar 16 13:24:19.995: INFO: Trying to get logs from node latest-worker pod pod-943c7bf4-6055-4175-b81b-71549b7cc75d container test-container:
STEP: delete the pod
Mar 16 13:24:20.012: INFO: Waiting for pod pod-943c7bf4-6055-4175-b81b-71549b7cc75d to disappear
Mar 16 13:24:20.028: INFO: Pod pod-943c7bf4-6055-4175-b81b-71549b7cc75d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:24:20.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2287" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1166,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:24:20.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Mar 16 13:24:20.126: INFO: >>> kubeConfig: /root/.kube/config
Mar 16 13:24:22.097: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:24:32.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2753" for this suite.
• [SLOW TEST:12.718 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":73,"skipped":1216,"failed":0}
SSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:24:32.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3454.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3454.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3454.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3454.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3454.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3454.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 16 13:24:38.905: INFO: DNS probes using dns-3454/dns-test-4162d80b-b5a6-4c04-bea1-28188002aa93 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:24:38.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3454" for this suite.
• [SLOW TEST:6.243 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":74,"skipped":1219,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:24:38.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:24:50.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8328" for this suite.
• [SLOW TEST:11.422 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replica set. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":75,"skipped":1230,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:24:50.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-26c7814c-2a00-4254-8256-46654a515446 in namespace container-probe-4536
Mar 16 13:24:54.488: INFO: Started pod liveness-26c7814c-2a00-4254-8256-46654a515446 in namespace container-probe-4536
STEP: checking the pod's current state and verifying that restartCount is present
Mar 16 13:24:54.491: INFO: Initial restart count of pod liveness-26c7814c-2a00-4254-8256-46654a515446 is 0
Mar 16 13:25:10.607: INFO: Restart count of pod container-probe-4536/liveness-26c7814c-2a00-4254-8256-46654a515446 is now 1 (16.116021565s elapsed)
Mar 16 13:25:30.845: INFO: Restart count of pod container-probe-4536/liveness-26c7814c-2a00-4254-8256-46654a515446 is now 2 (36.353547743s elapsed)
Mar 16 13:25:54.369: INFO: Restart count of pod container-probe-4536/liveness-26c7814c-2a00-4254-8256-46654a515446 is now 3 (59.87754565s elapsed)
Mar 16 13:26:10.575: INFO: Restart count of pod container-probe-4536/liveness-26c7814c-2a00-4254-8256-46654a515446 is now 4 (1m16.084310292s elapsed)
Mar 16 13:27:19.021: INFO: Restart count of pod container-probe-4536/liveness-26c7814c-2a00-4254-8256-46654a515446 is now 5 (2m24.529981765s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:27:19.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4536" for this suite.
• [SLOW TEST:148.644 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":76,"skipped":1287,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:27:19.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 16 13:27:19.719: INFO: Create a RollingUpdate DaemonSet
Mar 16 13:27:19.722: INFO: Check that daemon pods launch on every node of the cluster
Mar 16 13:27:19.754: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:27:19.770: INFO: Number of nodes with available pods: 0
Mar 16 13:27:19.770: INFO: Node latest-worker is running more than one daemon pod
Mar 16 13:27:20.775: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:27:20.778: INFO: Number of nodes with available pods: 0
Mar 16 13:27:20.778: INFO: Node latest-worker is running more than one daemon pod
Mar 16 13:27:21.966: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:27:21.970: INFO: Number of nodes with available pods: 0
Mar 16 13:27:21.970: INFO: Node latest-worker is running more than one daemon pod
Mar 16 13:27:22.823: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:27:22.827: INFO: Number of nodes with available pods: 0
Mar 16 13:27:22.827: INFO: Node latest-worker is running more than one daemon pod
Mar 16 13:27:23.775: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:27:23.779: INFO: Number of nodes with available pods: 0
Mar 16 13:27:23.779: INFO: Node latest-worker is running more than one daemon pod
Mar 16 13:27:24.778: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:27:24.809: INFO: Number of nodes with available pods: 2
Mar 16 13:27:24.809: INFO: Number of running nodes: 2, number of available pods: 2
Mar 16 13:27:24.809: INFO: Update the DaemonSet to trigger a rollout
Mar 16 13:27:24.816: INFO: Updating DaemonSet daemon-set
Mar 16 13:27:33.848: INFO: Roll back the DaemonSet before rollout is complete
Mar 16 13:27:33.853: INFO: Updating DaemonSet daemon-set
Mar 16 13:27:33.853: INFO: Make sure DaemonSet rollback is complete
Mar 16 13:27:33.870: INFO: Wrong image for pod: daemon-set-npnvb. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 16 13:27:33.870: INFO: Pod daemon-set-npnvb is not available
Mar 16 13:27:34.014: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:27:35.188: INFO: Wrong image for pod: daemon-set-npnvb. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 16 13:27:35.188: INFO: Pod daemon-set-npnvb is not available
Mar 16 13:27:35.219: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 16 13:27:36.026: INFO: Pod daemon-set-z8jq8 is not available
Mar 16 13:27:36.030: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-170, will wait for the garbage collector to delete the pods
Mar 16 13:27:36.134: INFO: Deleting DaemonSet.extensions daemon-set took: 6.168623ms
Mar 16 13:27:36.435: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.313451ms
Mar 16 13:27:40.606: INFO: Number of nodes with available pods: 0
Mar 16 13:27:40.606: INFO: Number of running nodes: 0, number of available pods: 0
Mar 16 13:27:40.608: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-170/daemonsets","resourceVersion":"274077"},"items":null}
Mar 16 13:27:40.611: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-170/pods","resourceVersion":"274077"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:27:40.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-170" for this suite.
• [SLOW TEST:21.559 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":77,"skipped":1338,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:27:40.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 16 13:27:40.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Mar 16 13:27:43.740:
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1974 create -f -' Mar 16 13:27:49.418: INFO: stderr: "" Mar 16 13:27:49.418: INFO: stdout: "e2e-test-crd-publish-openapi-5570-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 16 13:27:49.418: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1974 delete e2e-test-crd-publish-openapi-5570-crds test-cr' Mar 16 13:27:49.551: INFO: stderr: "" Mar 16 13:27:49.551: INFO: stdout: "e2e-test-crd-publish-openapi-5570-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 16 13:27:49.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1974 apply -f -' Mar 16 13:27:49.965: INFO: stderr: "" Mar 16 13:27:49.965: INFO: stdout: "e2e-test-crd-publish-openapi-5570-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 16 13:27:49.965: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1974 delete e2e-test-crd-publish-openapi-5570-crds test-cr' Mar 16 13:27:50.104: INFO: stderr: "" Mar 16 13:27:50.104: INFO: stdout: "e2e-test-crd-publish-openapi-5570-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 16 13:27:50.105: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5570-crds' Mar 16 13:27:50.474: INFO: stderr: "" Mar 16 13:27:50.474: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5570-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for 
Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:27:53.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1974" for this suite. 
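The schema exercised above publishes a CRD whose nested field preserves unknown properties, which is why client-side validation (`kubectl create`/`apply`) accepts arbitrary keys and `kubectl explain` shows `spec`/`status` without child fields. A minimal illustrative CRD for this behavior (the group, kind, and names below are hypothetical stand-ins, not the suite's randomized `e2e-test-crd-publish-openapi-5570-crd` identifiers):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              description: Specification of Waldo
              type: object
              # Accept and persist arbitrary, unvalidated fields under spec.
              x-kubernetes-preserve-unknown-fields: true
            status:
              description: Status of Waldo
              type: object
              x-kubernetes-preserve-unknown-fields: true
```

With such a schema published to OpenAPI, the apiserver and kubectl tolerate any properties inside the preserved subtrees, matching the create/apply/explain steps logged above.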
• [SLOW TEST:12.768 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":78,"skipped":1347,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:27:53.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:27:53.540: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 16 13:27:55.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7797 create -f -' Mar 16 13:28:00.871: INFO: stderr: "" Mar 16 13:28:00.871: INFO: stdout: 
"e2e-test-crd-publish-openapi-7230-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 16 13:28:00.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7797 delete e2e-test-crd-publish-openapi-7230-crds test-cr' Mar 16 13:28:01.016: INFO: stderr: "" Mar 16 13:28:01.016: INFO: stdout: "e2e-test-crd-publish-openapi-7230-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 16 13:28:01.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7797 apply -f -' Mar 16 13:28:01.320: INFO: stderr: "" Mar 16 13:28:01.320: INFO: stdout: "e2e-test-crd-publish-openapi-7230-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 16 13:28:01.320: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7797 delete e2e-test-crd-publish-openapi-7230-crds test-cr' Mar 16 13:28:01.467: INFO: stderr: "" Mar 16 13:28:01.467: INFO: stdout: "e2e-test-crd-publish-openapi-7230-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 16 13:28:01.467: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7230-crds' Mar 16 13:28:01.795: INFO: stderr: "" Mar 16 13:28:01.795: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7230-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:28:03.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"crd-publish-openapi-7797" for this suite. • [SLOW TEST:10.288 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":79,"skipped":1363,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:28:03.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 16 13:28:03.842: INFO: Waiting up to 5m0s for pod "pod-c4053209-c369-43d0-9649-48f6bb5caa5b" in namespace "emptydir-2514" to be "Succeeded or Failed" Mar 16 13:28:03.919: INFO: Pod "pod-c4053209-c369-43d0-9649-48f6bb5caa5b": Phase="Pending", Reason="", readiness=false. Elapsed: 76.239864ms Mar 16 13:28:05.922: INFO: Pod "pod-c4053209-c369-43d0-9649-48f6bb5caa5b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.080102011s Mar 16 13:28:07.926: INFO: Pod "pod-c4053209-c369-43d0-9649-48f6bb5caa5b": Phase="Running", Reason="", readiness=true. Elapsed: 4.083898408s Mar 16 13:28:09.930: INFO: Pod "pod-c4053209-c369-43d0-9649-48f6bb5caa5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088040523s STEP: Saw pod success Mar 16 13:28:09.930: INFO: Pod "pod-c4053209-c369-43d0-9649-48f6bb5caa5b" satisfied condition "Succeeded or Failed" Mar 16 13:28:09.933: INFO: Trying to get logs from node latest-worker2 pod pod-c4053209-c369-43d0-9649-48f6bb5caa5b container test-container: STEP: delete the pod Mar 16 13:28:10.035: INFO: Waiting for pod pod-c4053209-c369-43d0-9649-48f6bb5caa5b to disappear Mar 16 13:28:10.071: INFO: Pod pod-c4053209-c369-43d0-9649-48f6bb5caa5b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:28:10.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2514" for this suite. 
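The `(non-root,0777,default)` variant above creates a pod that runs as a non-root user, writes a file with mode 0777 into an emptyDir volume on the default (node-disk) medium, and must reach `Succeeded`. A hedged sketch of an equivalent pod (image, UID, and command are illustrative, not the suite's test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001          # non-root, per the test variant
  containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
        - name: test-volume
          mountPath: /test-volume
  volumes:
    - name: test-volume
      emptyDir: {}           # no medium set, so the node's default storage backs the volume
```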
• [SLOW TEST:6.395 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":80,"skipped":1387,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:28:10.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 16 13:28:10.178: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 16 13:28:10.356: INFO: Waiting for terminating namespaces to be deleted... 
Mar 16 13:28:10.358: INFO: Logging pods the kubelet thinks is on node latest-worker before test Mar 16 13:28:10.379: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 16 13:28:10.379: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 13:28:10.379: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 16 13:28:10.379: INFO: Container kube-proxy ready: true, restart count 0 Mar 16 13:28:10.379: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Mar 16 13:28:10.384: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 16 13:28:10.384: INFO: Container kube-proxy ready: true, restart count 0 Mar 16 13:28:10.384: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 16 13:28:10.384: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node latest-worker STEP: verifying the node has the label node latest-worker2 Mar 16 13:28:10.583: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker Mar 16 13:28:10.583: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2 Mar 16 13:28:10.583: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2 Mar 16 13:28:10.583: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on Node latest-worker STEP: Starting Pods to consume most of the cluster CPU. 
Mar 16 13:28:10.583: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker Mar 16 13:28:10.589: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-016d2c2d-9f05-4837-a22c-ced51dfddca7.15fccbd665aa3572], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9886/filler-pod-016d2c2d-9f05-4837-a22c-ced51dfddca7 to latest-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-016d2c2d-9f05-4837-a22c-ced51dfddca7.15fccbd6b1c9a343], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-016d2c2d-9f05-4837-a22c-ced51dfddca7.15fccbd72020d302], Reason = [Created], Message = [Created container filler-pod-016d2c2d-9f05-4837-a22c-ced51dfddca7] STEP: Considering event: Type = [Normal], Name = [filler-pod-016d2c2d-9f05-4837-a22c-ced51dfddca7.15fccbd740affe6a], Reason = [Started], Message = [Started container filler-pod-016d2c2d-9f05-4837-a22c-ced51dfddca7] STEP: Considering event: Type = [Normal], Name = [filler-pod-7b34e128-2850-440c-acd0-ca8df5e6961d.15fccbd66c3b6d91], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9886/filler-pod-7b34e128-2850-440c-acd0-ca8df5e6961d to latest-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-7b34e128-2850-440c-acd0-ca8df5e6961d.15fccbd6f8aaa851], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-7b34e128-2850-440c-acd0-ca8df5e6961d.15fccbd748ebba56], Reason = [Created], Message = [Created container filler-pod-7b34e128-2850-440c-acd0-ca8df5e6961d] STEP: Considering event: Type = [Normal], Name = [filler-pod-7b34e128-2850-440c-acd0-ca8df5e6961d.15fccbd758a1cf3d], Reason = [Started], Message = [Started container 
filler-pod-7b34e128-2850-440c-acd0-ca8df5e6961d] STEP: Considering event: Type = [Warning], Name = [additional-pod.15fccbd7d42a5b14], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node latest-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node latest-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:28:17.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9886" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.025 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":81,"skipped":1399,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:28:18.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns 
STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2255.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2255.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2255.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2255.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 13:28:26.420: INFO: DNS probes using dns-test-c3bcdcf1-9cb2-445c-9344-81f8a60c80af succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2255.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2255.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2255.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2255.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 13:28:35.307: INFO: File wheezy_udp@dns-test-service-3.dns-2255.svc.cluster.local from pod dns-2255/dns-test-8b741e50-d3a1-463d-a661-b50ad984629f contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 16 13:28:35.311: INFO: File jessie_udp@dns-test-service-3.dns-2255.svc.cluster.local from pod dns-2255/dns-test-8b741e50-d3a1-463d-a661-b50ad984629f contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 13:28:35.311: INFO: Lookups using dns-2255/dns-test-8b741e50-d3a1-463d-a661-b50ad984629f failed for: [wheezy_udp@dns-test-service-3.dns-2255.svc.cluster.local jessie_udp@dns-test-service-3.dns-2255.svc.cluster.local] Mar 16 13:28:40.315: INFO: File wheezy_udp@dns-test-service-3.dns-2255.svc.cluster.local from pod dns-2255/dns-test-8b741e50-d3a1-463d-a661-b50ad984629f contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 13:28:40.318: INFO: File jessie_udp@dns-test-service-3.dns-2255.svc.cluster.local from pod dns-2255/dns-test-8b741e50-d3a1-463d-a661-b50ad984629f contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 13:28:40.318: INFO: Lookups using dns-2255/dns-test-8b741e50-d3a1-463d-a661-b50ad984629f failed for: [wheezy_udp@dns-test-service-3.dns-2255.svc.cluster.local jessie_udp@dns-test-service-3.dns-2255.svc.cluster.local] Mar 16 13:28:45.386: INFO: File wheezy_udp@dns-test-service-3.dns-2255.svc.cluster.local from pod dns-2255/dns-test-8b741e50-d3a1-463d-a661-b50ad984629f contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 13:28:45.390: INFO: File jessie_udp@dns-test-service-3.dns-2255.svc.cluster.local from pod dns-2255/dns-test-8b741e50-d3a1-463d-a661-b50ad984629f contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 13:28:45.390: INFO: Lookups using dns-2255/dns-test-8b741e50-d3a1-463d-a661-b50ad984629f failed for: [wheezy_udp@dns-test-service-3.dns-2255.svc.cluster.local jessie_udp@dns-test-service-3.dns-2255.svc.cluster.local] Mar 16 13:28:50.315: INFO: File wheezy_udp@dns-test-service-3.dns-2255.svc.cluster.local from pod dns-2255/dns-test-8b741e50-d3a1-463d-a661-b50ad984629f contains 'foo.example.com. ' instead of 'bar.example.com.' 
Mar 16 13:28:50.318: INFO: File jessie_udp@dns-test-service-3.dns-2255.svc.cluster.local from pod dns-2255/dns-test-8b741e50-d3a1-463d-a661-b50ad984629f contains 'foo.example.com. ' instead of 'bar.example.com.' Mar 16 13:28:50.318: INFO: Lookups using dns-2255/dns-test-8b741e50-d3a1-463d-a661-b50ad984629f failed for: [wheezy_udp@dns-test-service-3.dns-2255.svc.cluster.local jessie_udp@dns-test-service-3.dns-2255.svc.cluster.local] Mar 16 13:28:55.319: INFO: DNS probes using dns-test-8b741e50-d3a1-463d-a661-b50ad984629f succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2255.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2255.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2255.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2255.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 13:29:02.634: INFO: DNS probes using dns-test-a5b42c7d-b92a-4656-80ea-8dafe1a721b3 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:29:02.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2255" for this suite. 
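The three probe phases above correspond to successive mutations of a single Service: an ExternalName pointing at `foo.example.com`, a patch to `bar.example.com` (the interval where the prober still sees the cached `foo.example.com.` CNAME), and finally conversion to `type: ClusterIP` so the name resolves to an A record. The starting manifest would look roughly like this (shape illustrative; the service and namespace names are taken from the log):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-2255
spec:
  type: ExternalName
  externalName: foo.example.com   # later patched to bar.example.com, then to type=ClusterIP
```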
• [SLOW TEST:44.923 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":82,"skipped":1403,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:29:03.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:29:03.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6387" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":83,"skipped":1434,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:29:03.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 16 13:29:03.981: INFO: Waiting up to 5m0s for pod "downwardapi-volume-769e8964-ce52-419c-bcad-e2e10932bd1b" in namespace "projected-6825" to be "Succeeded or Failed" Mar 16 13:29:04.004: INFO: Pod "downwardapi-volume-769e8964-ce52-419c-bcad-e2e10932bd1b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.355138ms Mar 16 13:29:06.008: INFO: Pod "downwardapi-volume-769e8964-ce52-419c-bcad-e2e10932bd1b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.027158146s Mar 16 13:29:08.012: INFO: Pod "downwardapi-volume-769e8964-ce52-419c-bcad-e2e10932bd1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030975628s Mar 16 13:29:10.045: INFO: Pod "downwardapi-volume-769e8964-ce52-419c-bcad-e2e10932bd1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063997298s STEP: Saw pod success Mar 16 13:29:10.045: INFO: Pod "downwardapi-volume-769e8964-ce52-419c-bcad-e2e10932bd1b" satisfied condition "Succeeded or Failed" Mar 16 13:29:10.067: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-769e8964-ce52-419c-bcad-e2e10932bd1b container client-container: STEP: delete the pod Mar 16 13:29:10.133: INFO: Waiting for pod downwardapi-volume-769e8964-ce52-419c-bcad-e2e10932bd1b to disappear Mar 16 13:29:10.139: INFO: Pod downwardapi-volume-769e8964-ce52-419c-bcad-e2e10932bd1b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:29:10.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6825" for this suite. 
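The projected downward API volume in this test exposes the container's memory limit through a `resourceFieldRef`; because the container declares no memory limit, the kubelet reports the node's allocatable memory instead, which is the behavior under test. A minimal illustrative pod (names and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
      # Deliberately no resources.limits.memory: node allocatable is substituted.
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      projected:
        sources:
          - downwardAPI:
              items:
                - path: memory_limit
                  resourceFieldRef:
                    containerName: client-container
                    resource: limits.memory
```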
• [SLOW TEST:6.795 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":84,"skipped":1439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:29:10.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 13:29:11.506: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 13:29:13.516: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962151, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962151, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962151, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962151, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 13:29:16.597: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:29:17.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-498" for this suite. STEP: Destroying namespace "webhook-498-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.862 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":85,"skipped":1481,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:29:18.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:29:22.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6458" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":86,"skipped":1482,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:29:22.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 16 13:29:23.398: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 16 13:29:25.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962163, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962163, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does 
not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962163, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962163, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:29:27.470: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962163, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962163, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962163, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962163, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 13:29:30.560: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:29:30.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] 
CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:29:31.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6802" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.668 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":87,"skipped":1490,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:29:32.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-5d75f9d2-9547-4e43-9896-a8ae941ec654 STEP: Creating a pod to test 
consume configMaps Mar 16 13:29:33.136: INFO: Waiting up to 5m0s for pod "pod-configmaps-8e67a336-b705-423b-b052-324a65242460" in namespace "configmap-2371" to be "Succeeded or Failed" Mar 16 13:29:33.246: INFO: Pod "pod-configmaps-8e67a336-b705-423b-b052-324a65242460": Phase="Pending", Reason="", readiness=false. Elapsed: 109.693065ms Mar 16 13:29:35.250: INFO: Pod "pod-configmaps-8e67a336-b705-423b-b052-324a65242460": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113738645s Mar 16 13:29:38.435: INFO: Pod "pod-configmaps-8e67a336-b705-423b-b052-324a65242460": Phase="Running", Reason="", readiness=true. Elapsed: 5.298337266s Mar 16 13:29:40.438: INFO: Pod "pod-configmaps-8e67a336-b705-423b-b052-324a65242460": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.302032046s STEP: Saw pod success Mar 16 13:29:40.438: INFO: Pod "pod-configmaps-8e67a336-b705-423b-b052-324a65242460" satisfied condition "Succeeded or Failed" Mar 16 13:29:40.441: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-8e67a336-b705-423b-b052-324a65242460 container configmap-volume-test: STEP: delete the pod Mar 16 13:29:40.555: INFO: Waiting for pod pod-configmaps-8e67a336-b705-423b-b052-324a65242460 to disappear Mar 16 13:29:40.577: INFO: Pod pod-configmaps-8e67a336-b705-423b-b052-324a65242460 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:29:40.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2371" for this suite. 
• [SLOW TEST:8.238 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1493,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:29:40.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Mar 16 13:29:40.788: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info' Mar 16 13:29:40.955: INFO: stderr: "" Mar 16 13:29:40.955: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster 
problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:29:40.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3423" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":89,"skipped":1496,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:29:41.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-363 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 16 13:29:41.256: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 16 13:29:41.471: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 16 13:29:43.725: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 16 13:29:45.475: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 16 13:29:47.477: INFO: The 
status of Pod netserver-0 is Running (Ready = false) Mar 16 13:29:49.626: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:29:51.474: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:29:53.475: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:29:55.475: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 16 13:29:55.480: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 16 13:29:57.484: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 16 13:29:59.484: INFO: The status of Pod netserver-1 is Running (Ready = false) Mar 16 13:30:01.484: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 16 13:30:07.951: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.73 8081 | grep -v '^\s*$'] Namespace:pod-network-test-363 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:30:07.951: INFO: >>> kubeConfig: /root/.kube/config I0316 13:30:07.988103 7 log.go:172] (0xc0028af810) (0xc0019fce60) Create stream I0316 13:30:07.988132 7 log.go:172] (0xc0028af810) (0xc0019fce60) Stream added, broadcasting: 1 I0316 13:30:07.990222 7 log.go:172] (0xc0028af810) Reply frame received for 1 I0316 13:30:07.990270 7 log.go:172] (0xc0028af810) (0xc000fde000) Create stream I0316 13:30:07.990291 7 log.go:172] (0xc0028af810) (0xc000fde000) Stream added, broadcasting: 3 I0316 13:30:07.991449 7 log.go:172] (0xc0028af810) Reply frame received for 3 I0316 13:30:07.991497 7 log.go:172] (0xc0028af810) (0xc002743680) Create stream I0316 13:30:07.991519 7 log.go:172] (0xc0028af810) (0xc002743680) Stream added, broadcasting: 5 I0316 13:30:07.992946 7 log.go:172] (0xc0028af810) Reply frame received for 5 I0316 13:30:09.045368 7 log.go:172] (0xc0028af810) Data frame received for 5 I0316 13:30:09.045407 7 log.go:172] (0xc0028af810) Data frame 
received for 3 I0316 13:30:09.045441 7 log.go:172] (0xc000fde000) (3) Data frame handling I0316 13:30:09.045521 7 log.go:172] (0xc000fde000) (3) Data frame sent I0316 13:30:09.045547 7 log.go:172] (0xc0028af810) Data frame received for 3 I0316 13:30:09.045628 7 log.go:172] (0xc000fde000) (3) Data frame handling I0316 13:30:09.045658 7 log.go:172] (0xc002743680) (5) Data frame handling I0316 13:30:09.047410 7 log.go:172] (0xc0028af810) Data frame received for 1 I0316 13:30:09.047429 7 log.go:172] (0xc0019fce60) (1) Data frame handling I0316 13:30:09.047449 7 log.go:172] (0xc0019fce60) (1) Data frame sent I0316 13:30:09.047464 7 log.go:172] (0xc0028af810) (0xc0019fce60) Stream removed, broadcasting: 1 I0316 13:30:09.047551 7 log.go:172] (0xc0028af810) (0xc0019fce60) Stream removed, broadcasting: 1 I0316 13:30:09.047567 7 log.go:172] (0xc0028af810) (0xc000fde000) Stream removed, broadcasting: 3 I0316 13:30:09.047657 7 log.go:172] (0xc0028af810) Go away received I0316 13:30:09.047759 7 log.go:172] (0xc0028af810) (0xc002743680) Stream removed, broadcasting: 5 Mar 16 13:30:09.047: INFO: Found all expected endpoints: [netserver-0] Mar 16 13:30:09.050: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.5 8081 | grep -v '^\s*$'] Namespace:pod-network-test-363 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:30:09.050: INFO: >>> kubeConfig: /root/.kube/config I0316 13:30:09.089247 7 log.go:172] (0xc002be2580) (0xc002743a40) Create stream I0316 13:30:09.089276 7 log.go:172] (0xc002be2580) (0xc002743a40) Stream added, broadcasting: 1 I0316 13:30:09.090796 7 log.go:172] (0xc002be2580) Reply frame received for 1 I0316 13:30:09.090838 7 log.go:172] (0xc002be2580) (0xc002743ae0) Create stream I0316 13:30:09.090850 7 log.go:172] (0xc002be2580) (0xc002743ae0) Stream added, broadcasting: 3 I0316 13:30:09.091532 7 log.go:172] (0xc002be2580) Reply frame received for 3 I0316 
13:30:09.091556 7 log.go:172] (0xc002be2580) (0xc002743b80) Create stream I0316 13:30:09.091564 7 log.go:172] (0xc002be2580) (0xc002743b80) Stream added, broadcasting: 5 I0316 13:30:09.092179 7 log.go:172] (0xc002be2580) Reply frame received for 5 I0316 13:30:10.156735 7 log.go:172] (0xc002be2580) Data frame received for 5 I0316 13:30:10.156772 7 log.go:172] (0xc002743b80) (5) Data frame handling I0316 13:30:10.156796 7 log.go:172] (0xc002be2580) Data frame received for 3 I0316 13:30:10.156805 7 log.go:172] (0xc002743ae0) (3) Data frame handling I0316 13:30:10.156815 7 log.go:172] (0xc002743ae0) (3) Data frame sent I0316 13:30:10.156824 7 log.go:172] (0xc002be2580) Data frame received for 3 I0316 13:30:10.156830 7 log.go:172] (0xc002743ae0) (3) Data frame handling I0316 13:30:10.158445 7 log.go:172] (0xc002be2580) Data frame received for 1 I0316 13:30:10.158487 7 log.go:172] (0xc002743a40) (1) Data frame handling I0316 13:30:10.158520 7 log.go:172] (0xc002743a40) (1) Data frame sent I0316 13:30:10.158547 7 log.go:172] (0xc002be2580) (0xc002743a40) Stream removed, broadcasting: 1 I0316 13:30:10.158663 7 log.go:172] (0xc002be2580) Go away received I0316 13:30:10.158697 7 log.go:172] (0xc002be2580) (0xc002743a40) Stream removed, broadcasting: 1 I0316 13:30:10.158749 7 log.go:172] (0xc002be2580) (0xc002743ae0) Stream removed, broadcasting: 3 I0316 13:30:10.158764 7 log.go:172] (0xc002be2580) (0xc002743b80) Stream removed, broadcasting: 5 Mar 16 13:30:10.158: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:30:10.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-363" for this suite. 
• [SLOW TEST:29.000 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1511,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:30:10.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-cb2645ba-3d7c-41cd-9ed1-556a37974bf2 STEP: Creating a pod to test consume configMaps Mar 16 13:30:10.418: INFO: Waiting up to 5m0s for pod "pod-configmaps-b5d14120-1b7f-41b3-b805-af08e610c418" in namespace "configmap-4625" to be "Succeeded or Failed" Mar 16 13:30:10.446: INFO: Pod "pod-configmaps-b5d14120-1b7f-41b3-b805-af08e610c418": Phase="Pending", Reason="", readiness=false. 
Elapsed: 27.471475ms Mar 16 13:30:12.450: INFO: Pod "pod-configmaps-b5d14120-1b7f-41b3-b805-af08e610c418": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031408827s Mar 16 13:30:14.453: INFO: Pod "pod-configmaps-b5d14120-1b7f-41b3-b805-af08e610c418": Phase="Running", Reason="", readiness=true. Elapsed: 4.034718125s Mar 16 13:30:16.662: INFO: Pod "pod-configmaps-b5d14120-1b7f-41b3-b805-af08e610c418": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.243971385s STEP: Saw pod success Mar 16 13:30:16.663: INFO: Pod "pod-configmaps-b5d14120-1b7f-41b3-b805-af08e610c418" satisfied condition "Succeeded or Failed" Mar 16 13:30:16.692: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-b5d14120-1b7f-41b3-b805-af08e610c418 container configmap-volume-test: STEP: delete the pod Mar 16 13:30:17.473: INFO: Waiting for pod pod-configmaps-b5d14120-1b7f-41b3-b805-af08e610c418 to disappear Mar 16 13:30:17.608: INFO: Pod pod-configmaps-b5d14120-1b7f-41b3-b805-af08e610c418 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:30:17.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4625" for this suite. 
• [SLOW TEST:7.502 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1518,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:30:17.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 16 13:30:18.678: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:30:27.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7154" for this suite. 
• [SLOW TEST:10.168 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":92,"skipped":1520,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:30:27.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Mar 16 13:30:28.353: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:30:28.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3627" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":93,"skipped":1545,"failed":0} SSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:30:28.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Mar 16 13:30:28.910: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Mar 16 13:30:28.922: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 16 13:30:28.922: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from 
LimitRange Mar 16 13:30:28.927: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Mar 16 13:30:28.928: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Mar 16 13:30:29.172: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Mar 16 13:30:29.172: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Mar 16 13:30:36.889: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:30:36.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-1497" for this suite.
• [SLOW TEST:8.526 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":94,"skipped":1551,"failed":0}
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:30:37.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Mar 16 13:30:37.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8450'
Mar 16 13:30:37.746: INFO: stderr: ""
Mar 16 13:30:37.746: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Mar 16 13:30:38.750: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 16 13:30:38.750: INFO: Found 0 / 1
Mar 16 13:30:39.820: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 16 13:30:39.820: INFO: Found 0 / 1
Mar 16 13:30:40.765: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 16 13:30:40.765: INFO: Found 0 / 1
Mar 16 13:30:41.818: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 16 13:30:41.818: INFO: Found 1 / 1
Mar 16 13:30:41.818: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Mar 16 13:30:41.821: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 16 13:30:41.821: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Mar 16 13:30:41.821: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-n9gmm --namespace=kubectl-8450 -p {"metadata":{"annotations":{"x":"y"}}}'
Mar 16 13:30:42.027: INFO: stderr: ""
Mar 16 13:30:42.027: INFO: stdout: "pod/agnhost-master-n9gmm patched\n"
STEP: checking annotations
Mar 16 13:30:42.083: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 16 13:30:42.083: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:30:42.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8450" for this suite.
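The patch test above sends `-p {"metadata":{"annotations":{"x":"y"}}}` to add an annotation. For a plain map of string annotations this behaves like an RFC 7386 JSON merge patch; the sketch below is an illustrative re-implementation of that merge semantics (the helper name `json_merge_patch` is mine, not part of kubectl or client-go).

```python
def json_merge_patch(target, patch):
    """Apply an RFC 7386 JSON merge patch: dicts merge recursively,
    a null value deletes the key, anything else replaces it."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null means "remove this field"
        else:
            result[key] = json_merge_patch(result.get(key, {}), value)
    return result

# the pod object before the patch (trimmed to the fields that matter here)
pod = {"metadata": {"name": "agnhost-master-n9gmm", "annotations": {}}}
patched = json_merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
```

The null-deletes rule is also how a merge patch can remove an annotation, e.g. `{"metadata":{"annotations":{"x":null}}}`. Note that `kubectl patch` on built-in types actually defaults to a strategic merge patch, which only differs from this for lists.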
• [SLOW TEST:5.155 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":95,"skipped":1551,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:30:42.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Mar 16 13:30:42.730: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:30:54.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2692" for this suite.
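The InitContainer test above verifies the documented start ordering on a RestartAlways pod: init containers run one at a time, in spec order, each to completion, before any app container starts. A toy model of that guarantee (the `startup_order` helper is mine, purely illustrative):

```python
def startup_order(init_containers, app_containers):
    """Return the order in which containers are started:
    init containers sequentially first, then the app containers."""
    timeline = []
    for name in init_containers:
        timeline.append(f"init:{name}")  # must exit 0 before the next starts
    for name in app_containers:
        timeline.append(f"app:{name}")   # app containers start only afterwards
    return timeline
```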
• [SLOW TEST:11.907 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":96,"skipped":1579,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:30:54.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:30:58.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3795" for this suite.
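The Docker Containers test above leaves `command` and `args` blank, so the image's own ENTRYPOINT and CMD are used. Kubernetes documents a four-way precedence between the image defaults and the pod spec; a sketch of that table (the function name `effective_invocation` is mine):

```python
def effective_invocation(image_entrypoint, image_cmd, command=None, args=None):
    """Resolve the process a container runs, per the documented precedence:
    - neither set:        image ENTRYPOINT + image CMD
    - only command set:   command (image CMD is ignored)
    - only args set:      image ENTRYPOINT + args
    - both set:           command + args"""
    if command and args:
        return command + args
    if command:
        return command
    if args:
        return image_entrypoint + args
    return image_entrypoint + image_cmd
```

The conformance test is the first row: with both fields blank, the container must run exactly what the image metadata specifies.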
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1633,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:30:58.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Mar 16 13:30:58.665: INFO: Waiting up to 5m0s for pod "pod-973d08c1-a3ef-4b1e-b2a6-0d5352b2f0e4" in namespace "emptydir-5237" to be "Succeeded or Failed"
Mar 16 13:30:58.675: INFO: Pod "pod-973d08c1-a3ef-4b1e-b2a6-0d5352b2f0e4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.667566ms
Mar 16 13:31:01.052: INFO: Pod "pod-973d08c1-a3ef-4b1e-b2a6-0d5352b2f0e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.386869551s
Mar 16 13:31:03.115: INFO: Pod "pod-973d08c1-a3ef-4b1e-b2a6-0d5352b2f0e4": Phase="Running", Reason="", readiness=true. Elapsed: 4.448992775s
Mar 16 13:31:05.117: INFO: Pod "pod-973d08c1-a3ef-4b1e-b2a6-0d5352b2f0e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.451286301s
STEP: Saw pod success
Mar 16 13:31:05.117: INFO: Pod "pod-973d08c1-a3ef-4b1e-b2a6-0d5352b2f0e4" satisfied condition "Succeeded or Failed"
Mar 16 13:31:05.119: INFO: Trying to get logs from node latest-worker pod pod-973d08c1-a3ef-4b1e-b2a6-0d5352b2f0e4 container test-container:
STEP: delete the pod
Mar 16 13:31:05.215: INFO: Waiting for pod pod-973d08c1-a3ef-4b1e-b2a6-0d5352b2f0e4 to disappear
Mar 16 13:31:05.222: INFO: Pod pod-973d08c1-a3ef-4b1e-b2a6-0d5352b2f0e4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:31:05.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5237" for this suite.
• [SLOW TEST:6.858 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":98,"skipped":1656,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:31:05.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Mar 16 13:31:05.830: INFO: Waiting up to 5m0s for pod "downward-api-31d07d0b-3abf-4058-875d-188b7b37ed50" in namespace "downward-api-4578" to be "Succeeded or Failed"
Mar 16 13:31:05.887: INFO: Pod "downward-api-31d07d0b-3abf-4058-875d-188b7b37ed50": Phase="Pending", Reason="", readiness=false. Elapsed: 56.293135ms
Mar 16 13:31:07.905: INFO: Pod "downward-api-31d07d0b-3abf-4058-875d-188b7b37ed50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074303477s
Mar 16 13:31:09.926: INFO: Pod "downward-api-31d07d0b-3abf-4058-875d-188b7b37ed50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.095867944s
STEP: Saw pod success
Mar 16 13:31:09.926: INFO: Pod "downward-api-31d07d0b-3abf-4058-875d-188b7b37ed50" satisfied condition "Succeeded or Failed"
Mar 16 13:31:09.929: INFO: Trying to get logs from node latest-worker pod downward-api-31d07d0b-3abf-4058-875d-188b7b37ed50 container dapi-container:
STEP: delete the pod
Mar 16 13:31:10.418: INFO: Waiting for pod downward-api-31d07d0b-3abf-4058-875d-188b7b37ed50 to disappear
Mar 16 13:31:10.421: INFO: Pod downward-api-31d07d0b-3abf-4058-875d-188b7b37ed50 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:31:10.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4578" for this suite.
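The Downward API test above injects `limits.cpu/memory` and `requests.cpu/memory` into the container as env vars via `resourceFieldRef`, which divides the quantity by a `divisor` and rounds up. A deliberately simplified sketch of that conversion (only the `m` and binary `Ki`/`Mi`/`Gi` suffixes; real Kubernetes quantity parsing also handles decimal SI suffixes and exponents):

```python
import math

# multipliers for a small subset of Kubernetes quantity suffixes
UNITS = {"m": 0.001, "Ki": 1024, "Mi": 1024 ** 2, "Gi": 1024 ** 3}

def parse_quantity(q):
    """Parse a quantity like '500m', '64Mi', or '2' into a plain number."""
    for suffix in ("Ki", "Mi", "Gi", "m"):  # check 2-char suffixes first
        if q.endswith(suffix):
            return float(q[: -len(suffix)]) * UNITS[suffix]
    return float(q)

def downward_api_value(resource, divisor="1"):
    """Value exposed in the env var: ceil(resource / divisor),
    mirroring resourceFieldRef's divisor semantics."""
    return math.ceil(parse_quantity(resource) / parse_quantity(divisor))
```

With the default divisor of "1", a CPU limit of 500m surfaces as 1 (rounded up to whole cores); with divisor "1m" it surfaces as 500.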
• [SLOW TEST:5.210 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":99,"skipped":1678,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:31:10.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Mar 16 13:31:10.608: INFO: Waiting up to 5m0s for pod "pod-b1a65b0a-d752-41e3-b86f-6ce74f083267" in namespace "emptydir-3471" to be "Succeeded or Failed"
Mar 16 13:31:10.660: INFO: Pod "pod-b1a65b0a-d752-41e3-b86f-6ce74f083267": Phase="Pending", Reason="", readiness=false. Elapsed: 51.868055ms
Mar 16 13:31:12.775: INFO: Pod "pod-b1a65b0a-d752-41e3-b86f-6ce74f083267": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167165485s
Mar 16 13:31:14.778: INFO: Pod "pod-b1a65b0a-d752-41e3-b86f-6ce74f083267": Phase="Pending", Reason="", readiness=false. Elapsed: 4.170672216s
Mar 16 13:31:16.782: INFO: Pod "pod-b1a65b0a-d752-41e3-b86f-6ce74f083267": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.17394644s
STEP: Saw pod success
Mar 16 13:31:16.782: INFO: Pod "pod-b1a65b0a-d752-41e3-b86f-6ce74f083267" satisfied condition "Succeeded or Failed"
Mar 16 13:31:16.784: INFO: Trying to get logs from node latest-worker pod pod-b1a65b0a-d752-41e3-b86f-6ce74f083267 container test-container:
STEP: delete the pod
Mar 16 13:31:16.910: INFO: Waiting for pod pod-b1a65b0a-d752-41e3-b86f-6ce74f083267 to disappear
Mar 16 13:31:16.933: INFO: Pod pod-b1a65b0a-d752-41e3-b86f-6ce74f083267 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:31:16.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3471" for this suite.
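The emptyDir tests above ("root,0777,default", "non-root,0644,default", etc.) mount an emptyDir volume, write a file with the given mode, and assert on its `ls -l`-style permission bits in the test container's output. A small sketch of how an octal mode maps to those bits (the helper name is mine):

```python
def mode_string(mode):
    """Render the low nine permission bits of a mode the way `ls -l` does,
    e.g. 0o644 -> 'rw-r--r--'."""
    bits = "rwxrwxrwx"
    return "".join(
        ch if mode & (1 << (8 - i)) else "-"  # test each bit from 0o400 down
        for i, ch in enumerate(bits)
    )
```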
• [SLOW TEST:6.503 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":100,"skipped":1679,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:31:16.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-9336
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9336 to expose endpoints map[]
Mar 16 13:31:17.723: INFO: Get endpoints failed (86.170313ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Mar 16 13:31:18.732: INFO: successfully validated that service endpoint-test2 in namespace services-9336 exposes endpoints map[] (1.095529247s elapsed)
STEP: Creating pod pod1 in namespace services-9336
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9336 to expose endpoints map[pod1:[80]]
Mar 16 13:31:23.360: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.623291684s elapsed, will retry)
Mar 16 13:31:24.366: INFO: successfully validated that service endpoint-test2 in namespace services-9336 exposes endpoints map[pod1:[80]] (5.628418755s elapsed)
STEP: Creating pod pod2 in namespace services-9336
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9336 to expose endpoints map[pod1:[80] pod2:[80]]
Mar 16 13:31:28.850: INFO: Unexpected endpoints: found map[432f2444-78f7-4384-ae72-90254f936be7:[80]], expected map[pod1:[80] pod2:[80]] (4.480157157s elapsed, will retry)
Mar 16 13:31:29.859: INFO: successfully validated that service endpoint-test2 in namespace services-9336 exposes endpoints map[pod1:[80] pod2:[80]] (5.489344817s elapsed)
STEP: Deleting pod pod1 in namespace services-9336
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9336 to expose endpoints map[pod2:[80]]
Mar 16 13:31:31.114: INFO: successfully validated that service endpoint-test2 in namespace services-9336 exposes endpoints map[pod2:[80]] (1.226985033s elapsed)
STEP: Deleting pod pod2 in namespace services-9336
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9336 to expose endpoints map[]
Mar 16 13:31:32.331: INFO: successfully validated that service endpoint-test2 in namespace services-9336 exposes endpoints map[] (1.211523848s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:31:32.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9336" for this suite.
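The Services test above repeatedly polls until the endpoints object matches the map of ready backing pods, walking through map[] → map[pod1:[80]] → map[pod1:[80] pod2:[80]] → map[pod2:[80]] → map[] as pods are created and deleted. The expected map it converges on can be sketched as (illustrative helper, assuming the single port 80 used by the test):

```python
def expected_endpoints(ready_pods, port=80):
    """Endpoints map the controller should converge to:
    one entry per ready pod backing the service."""
    return {name: [port] for name in sorted(ready_pods)}

# the sequence of ready-pod sets the test drives through
transitions = [set(), {"pod1"}, {"pod1", "pod2"}, {"pod2"}, set()]
observed = [expected_endpoints(s) for s in transitions]
```

The retries in the log ("will retry") reflect the fact that the endpoints controller is eventually consistent: the map lags pod creation and deletion by a short interval.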
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
• [SLOW TEST:16.024 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":101,"skipped":1694,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:31:32.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 16 13:31:38.206: INFO: Successfully updated pod "pod-update-activedeadlineseconds-684b9bb6-fb72-4ee6-8d7e-f72b38302fee"
Mar 16 13:31:38.206: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-684b9bb6-fb72-4ee6-8d7e-f72b38302fee" in namespace "pods-6075" to be "terminated due to deadline exceeded"
Mar 16 13:31:38.268: INFO: Pod "pod-update-activedeadlineseconds-684b9bb6-fb72-4ee6-8d7e-f72b38302fee": Phase="Running", Reason="", readiness=true. Elapsed: 61.950794ms
Mar 16 13:31:41.029: INFO: Pod "pod-update-activedeadlineseconds-684b9bb6-fb72-4ee6-8d7e-f72b38302fee": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.822828011s
Mar 16 13:31:41.029: INFO: Pod "pod-update-activedeadlineseconds-684b9bb6-fb72-4ee6-8d7e-f72b38302fee" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:31:41.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6075" for this suite.
• [SLOW TEST:8.142 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1703,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:31:41.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Mar 16 13:31:42.901: INFO: Waiting up to 5m0s for pod "pod-22034ec5-63b2-464c-b8ea-86d8bef7eed1" in namespace "emptydir-8704" to be "Succeeded or Failed"
Mar 16 13:31:43.174: INFO: Pod "pod-22034ec5-63b2-464c-b8ea-86d8bef7eed1": Phase="Pending", Reason="", readiness=false. Elapsed: 273.858376ms
Mar 16 13:31:45.179: INFO: Pod "pod-22034ec5-63b2-464c-b8ea-86d8bef7eed1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.277947331s
Mar 16 13:31:47.268: INFO: Pod "pod-22034ec5-63b2-464c-b8ea-86d8bef7eed1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.367800946s
Mar 16 13:31:49.272: INFO: Pod "pod-22034ec5-63b2-464c-b8ea-86d8bef7eed1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.371406855s
STEP: Saw pod success
Mar 16 13:31:49.272: INFO: Pod "pod-22034ec5-63b2-464c-b8ea-86d8bef7eed1" satisfied condition "Succeeded or Failed"
Mar 16 13:31:49.275: INFO: Trying to get logs from node latest-worker pod pod-22034ec5-63b2-464c-b8ea-86d8bef7eed1 container test-container:
STEP: delete the pod
Mar 16 13:31:49.373: INFO: Waiting for pod pod-22034ec5-63b2-464c-b8ea-86d8bef7eed1 to disappear
Mar 16 13:31:49.587: INFO: Pod pod-22034ec5-63b2-464c-b8ea-86d8bef7eed1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:31:49.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8704" for this suite.
• [SLOW TEST:8.488 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1707,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:31:49.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:31:58.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3381" for this suite.
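The Watchers test above starts a watch from each resourceVersion produced by a background writer and checks that all watches deliver events in the same order. The guarantee it exercises amounts to: every watch started at resourceVersion v sees exactly the suffix of one global, RV-ordered event stream. A toy model (names are mine, not the watch API):

```python
def watch_from(events, resource_version):
    """A watch started at RV v delivers the events with RV > v,
    in the single global RV order."""
    return [e for e in events if e["rv"] > resource_version]

# a global stream of five events with increasing resourceVersions
events = [{"rv": v} for v in range(1, 6)]

# every watcher, whatever its start RV, sees a suffix of the same stream,
# so any two watchers agree on the relative order of shared events
streams = [watch_from(events, v) for v in range(5)]
```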
• [SLOW TEST:9.014 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":104,"skipped":1715,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:31:58.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 16 13:31:58.728: INFO: Creating deployment "webserver-deployment"
Mar 16 13:31:58.732: INFO: Waiting for observed generation 1
Mar 16 13:32:00.741: INFO: Waiting for all required pods to come up
Mar 16 13:32:00.744: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Mar 16 13:32:12.797: INFO: Waiting for deployment "webserver-deployment" to complete
Mar 16 13:32:12.803: INFO: Updating deployment "webserver-deployment" with a non-existent image
Mar 16 13:32:12.809: INFO: Updating deployment webserver-deployment
Mar 16 13:32:12.809: INFO: Waiting for observed generation 2
Mar 16 13:32:15.212: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Mar 16 13:32:15.425: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Mar 16 13:32:15.474: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 16 13:32:15.509: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Mar 16 13:32:15.509: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Mar 16 13:32:15.512: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Mar 16 13:32:15.515: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Mar 16 13:32:15.515: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Mar 16 13:32:15.618: INFO: Updating deployment webserver-deployment
Mar 16 13:32:15.618: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Mar 16 13:32:15.808: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Mar 16 13:32:15.840: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Mar 16 13:32:18.509: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-342 /apis/apps/v1/namespaces/deployment-342/deployments/webserver-deployment 1ee933b5-f8b4-49e3-b6ac-ca4331ede444 276202 3 2020-03-16 13:31:58 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030225e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-16 13:32:15 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-16 13:32:16 +0000 UTC,LastTransitionTime:2020-03-16 13:31:58 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}
Mar 16 13:32:18.757: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-342 /apis/apps/v1/namespaces/deployment-342/replicasets/webserver-deployment-c7997dcc8 4aef1557-c6f0-47d1-8e38-1102d3cccad6 276199 3 2020-03-16 13:32:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 1ee933b5-f8b4-49e3-b6ac-ca4331ede444 0xc003022b37 0xc003022b38}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003022ba8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Mar 16 13:32:18.757: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Mar 16 13:32:18.757: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-342 /apis/apps/v1/namespaces/deployment-342/replicasets/webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 276177 3 2020-03-16 13:31:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 1ee933b5-f8b4-49e3-b6ac-ca4331ede444 0xc003022a77 0xc003022a78}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003022ad8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Mar 16 13:32:19.104: INFO: Pod "webserver-deployment-595b5b9587-2mmpq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2mmpq webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-2mmpq 0c7bd549-ab1c-49da-8867-8cd5d2abb8e6 276248 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc0030230e7 0xc0030230e8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-16 13:32:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.104: INFO: Pod "webserver-deployment-595b5b9587-64czt" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-64czt webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-64czt 6f81f971-3571-4d93-b78e-2c1092188a33 276207 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc003023247 0xc003023248}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-16 13:32:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.105: INFO: Pod "webserver-deployment-595b5b9587-6s4s4" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6s4s4 webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-6s4s4 db3f3407-4205-493b-a645-bf0f320618eb 276036 0 2020-03-16 13:31:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc0030233a7 0xc0030233a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:31:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:31:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.88,StartTime:2020-03-16 13:31:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 13:32:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2a7e1ef8913e43d527bb8fccaed48a3283ef90da31a2f5f31020a543b8b0ad45,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.88,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.105: INFO: Pod "webserver-deployment-595b5b9587-78h4q" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-78h4q webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-78h4q 63297e98-b846-4fa6-bb19-d11ae48ac254 276194 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc003023527 0xc003023528}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-16 13:32:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.105: INFO: Pod "webserver-deployment-595b5b9587-9d42k" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9d42k webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-9d42k 7416a2a3-be9a-49c3-b793-304fa19719c6 276016 0 2020-03-16 13:31:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc003023687 0xc003023688}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:31:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:31:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.86,StartTime:2020-03-16 13:31:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 13:32:08 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0d080bd2c66bdad1200076b59bc08f53b93f5cdaa5d64196ce0e345e91948f26,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.86,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.106: INFO: Pod "webserver-deployment-595b5b9587-b5952" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-b5952 webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-b5952 7aff12f9-688b-42df-acb8-bf2f6e168e22 276206 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc003023807 0xc003023808}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-16 13:32:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.106: INFO: Pod "webserver-deployment-595b5b9587-bxf6z" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bxf6z webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-bxf6z 0bdc1f05-4093-4328-8aad-ec7e350f3aee 276216 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc003023967 0xc003023968}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-16 13:32:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.106: INFO: Pod "webserver-deployment-595b5b9587-d7h6m" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d7h6m webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-d7h6m 6ca4026a-6108-4826-9b3f-7d85dbaf58fa 276252 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc003023ac7 0xc003023ac8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-16 13:32:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.106: INFO: Pod "webserver-deployment-595b5b9587-djc5l" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-djc5l webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-djc5l 389cf179-4a3d-4457-8591-7f94a6ff975f 276191 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc003023c27 0xc003023c28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-16 13:32:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.107: INFO: Pod "webserver-deployment-595b5b9587-dkz8h" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dkz8h webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-dkz8h 0da6cd06-b8bd-42c1-be0a-d239f215329a 276042 0 2020-03-16 13:31:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc003023d87 0xc003023d88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:31:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:31:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.87,StartTime:2020-03-16 13:31:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 13:32:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a8565d0d28f443ff92a3382b9aa559c10a14ebbd9d34205cbdc3eb3e99a75016,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.87,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.107: INFO: Pod "webserver-deployment-595b5b9587-fhqsn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fhqsn webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-fhqsn 31d9a59d-50d1-4d19-a83d-b38de75936a6 276200 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc003023f07 0xc003023f08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-16 13:32:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.107: INFO: Pod "webserver-deployment-595b5b9587-fzhht" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fzhht webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-fzhht 9552f744-688c-42e8-a83d-f293b46ca3aa 276214 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc003100067 0xc003100068}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-16 13:32:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.107: INFO: Pod "webserver-deployment-595b5b9587-hxwqm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hxwqm webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-hxwqm 4fdbb0eb-d5e2-4c1c-b0eb-45bb5ec6cf6b 276026 0 2020-03-16 13:31:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc0031001c7 0xc0031001c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:31:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:31:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.14,StartTime:2020-03-16 13:31:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 13:32:09 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://892b0594058517efaecd4a2bf1cb37473338677b5a5a50a48c8507a053e67a8c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.108: INFO: Pod "webserver-deployment-595b5b9587-jb7mj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jb7mj webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-jb7mj ce837509-93fd-4952-9080-1cec72734917 276237 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc003100347 0xc003100348}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-16 13:32:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.108: INFO: Pod "webserver-deployment-595b5b9587-jcgp5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jcgp5 webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-jcgp5 6c146fe8-6c6d-428a-b7fe-25f55f2eeab7 276231 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc0031004a7 0xc0031004a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-16 13:32:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.108: INFO: Pod "webserver-deployment-595b5b9587-jrzv7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jrzv7 webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-jrzv7 1749465c-cef2-46f1-9dbc-0707a04bc92c 276022 0 2020-03-16 13:31:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc003100607 0xc003100608}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:31:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:31:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.13,StartTime:2020-03-16 13:31:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 13:32:09 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4a2c25918f22b9265a2a91e78ee7c98f6de179c69b804f705e517225b6f49554,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.13,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.108: INFO: Pod "webserver-deployment-595b5b9587-kfc7j" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kfc7j webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-kfc7j 690dfa0c-cac1-4787-a2df-4e17e9f8e2b0 276003 0 2020-03-16 13:31:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc003100787 0xc003100788}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:31:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:31:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.12,StartTime:2020-03-16 13:31:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 13:32:08 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://3a3f7acf53a2966ba6086485b0fb1c0bcd8ba10f323d0811d74fc23a85a0f618,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.109: INFO: Pod "webserver-deployment-595b5b9587-pkbgz" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pkbgz webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-pkbgz 03861506-8d8b-4850-a546-e0a5b573cadd 275998 0 2020-03-16 13:31:58 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc003100907 0xc003100908}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:31:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:31:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.85,StartTime:2020-03-16 13:31:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 13:32:08 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0a13441d8a695a131353e2b443d8fec72f867ed000acc6202ae7a7a735bffe06,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.85,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.109: INFO: Pod "webserver-deployment-595b5b9587-q2n8q" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-q2n8q webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-q2n8q af18a8e8-a739-4d63-b61d-d38db6759241 276240 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc003100a87 0xc003100a88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-16 13:32:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.109: INFO: Pod "webserver-deployment-595b5b9587-z5v5j" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z5v5j webserver-deployment-595b5b9587- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-595b5b9587-z5v5j 6826f13c-6fcf-4c15-bc37-4b7508ffb805 276030 0 2020-03-16 13:31:59 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 4647697a-6e3e-491d-8b40-49e73184bd1d 0xc003100be7 0xc003100be8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:31:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:31:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.15,StartTime:2020-03-16 13:31:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 13:32:10 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5dc6f328395c3f1ba3ddde0c8866e7fb08b3eb395b93a9abb164d889d2ff0acc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.109: INFO: Pod "webserver-deployment-c7997dcc8-2h5rq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2h5rq webserver-deployment-c7997dcc8- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-c7997dcc8-2h5rq 439fed62-8a02-4489-9a44-8083edd8c33d 276105 0 2020-03-16 13:32:13 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4aef1557-c6f0-47d1-8e38-1102d3cccad6 0xc003100d67 0xc003100d68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-16 13:32:14 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.110: INFO: Pod "webserver-deployment-c7997dcc8-4djzj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4djzj webserver-deployment-c7997dcc8- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-c7997dcc8-4djzj a376038d-5501-41c4-a363-1d25cd67ea28 276102 0 2020-03-16 13:32:13 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4aef1557-c6f0-47d1-8e38-1102d3cccad6 0xc003100ee0 0xc003100ee1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-16 13:32:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.110: INFO: Pod "webserver-deployment-c7997dcc8-5dqn4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5dqn4 webserver-deployment-c7997dcc8- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-c7997dcc8-5dqn4 009103b2-f30a-4812-a7de-238592e61583 276219 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4aef1557-c6f0-47d1-8e38-1102d3cccad6 0xc003101050 0xc003101051}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-16 13:32:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.110: INFO: Pod "webserver-deployment-c7997dcc8-5x2hc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5x2hc webserver-deployment-c7997dcc8- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-c7997dcc8-5x2hc f674f845-3515-4916-8074-bd61a0069786 276254 0 2020-03-16 13:32:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4aef1557-c6f0-47d1-8e38-1102d3cccad6 0xc0031011c0 0xc0031011c1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.90,StartTime:2020-03-16 13:32:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: 
authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.90,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.110: INFO: Pod "webserver-deployment-c7997dcc8-b7vtz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b7vtz webserver-deployment-c7997dcc8- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-c7997dcc8-b7vtz 58fb919b-9dc2-49b7-8295-f05545f8fc4b 276218 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4aef1557-c6f0-47d1-8e38-1102d3cccad6 0xc003101360 0xc003101361}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:
IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-16 13:32:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.111: INFO: Pod "webserver-deployment-c7997dcc8-frzcs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-frzcs webserver-deployment-c7997dcc8- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-c7997dcc8-frzcs 40e71e65-a657-48a5-a04c-04e7f7346fed 276198 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4aef1557-c6f0-47d1-8e38-1102d3cccad6 0xc0031014d0 0xc0031014d1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-16 13:32:15 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.111: INFO: Pod "webserver-deployment-c7997dcc8-j884k" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-j884k webserver-deployment-c7997dcc8- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-c7997dcc8-j884k 91ecff91-c2e8-464a-87ec-c2972c09a0df 276222 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4aef1557-c6f0-47d1-8e38-1102d3cccad6 0xc003101640 0xc003101641}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-16 13:32:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.111: INFO: Pod "webserver-deployment-c7997dcc8-n5s75" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-n5s75 webserver-deployment-c7997dcc8- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-c7997dcc8-n5s75 02587765-34c4-4538-8ae8-518b6d139b99 276226 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4aef1557-c6f0-47d1-8e38-1102d3cccad6 0xc0031017b0 0xc0031017b1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-16 13:32:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.112: INFO: Pod "webserver-deployment-c7997dcc8-nnl5h" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nnl5h webserver-deployment-c7997dcc8- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-c7997dcc8-nnl5h 3dab8bf8-ffd7-4edf-8246-83d3917f615b 276100 0 2020-03-16 13:32:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4aef1557-c6f0-47d1-8e38-1102d3cccad6 0xc003101920 0xc003101921}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-16 13:32:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.112: INFO: Pod "webserver-deployment-c7997dcc8-pzgmt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pzgmt webserver-deployment-c7997dcc8- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-c7997dcc8-pzgmt 9f28c699-a494-4554-8ed6-7b95976210e3 276212 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4aef1557-c6f0-47d1-8e38-1102d3cccad6 0xc003101a90 0xc003101a91}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-16 13:32:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.112: INFO: Pod "webserver-deployment-c7997dcc8-qgznb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qgznb webserver-deployment-c7997dcc8- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-c7997dcc8-qgznb 56c1b70f-79a9-4dc9-ab5d-d2f5b1ee3123 276230 0 2020-03-16 13:32:12 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4aef1557-c6f0-47d1-8e38-1102d3cccad6 0xc003101c00 0xc003101c01}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.17,StartTime:2020-03-16 13:32:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: 
authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.17,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.112: INFO: Pod "webserver-deployment-c7997dcc8-qqs6g" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qqs6g webserver-deployment-c7997dcc8- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-c7997dcc8-qqs6g f4b396d7-a2e0-4f37-b16e-1eb921850153 276225 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4aef1557-c6f0-47d1-8e38-1102d3cccad6 0xc003101da0 0xc003101da1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:
IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-03-16 13:32:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 13:32:19.112: INFO: Pod "webserver-deployment-c7997dcc8-z98v4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z98v4 webserver-deployment-c7997dcc8- deployment-342 /api/v1/namespaces/deployment-342/pods/webserver-deployment-c7997dcc8-z98v4 2a1c8dc0-d28c-4a7a-b887-5dd6fcd0d0c1 276247 0 2020-03-16 13:32:15 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 4aef1557-c6f0-47d1-8e38-1102d3cccad6 0xc003101f10 0xc003101f11}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vl6l7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vl6l7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vl6l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:32:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-16 13:32:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:32:19.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-342" for this suite. • [SLOW TEST:21.463 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":105,"skipped":1738,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:32:20.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 13:32:25.838: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 13:32:29.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962347, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:32:31.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962347, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:32:33.450: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962347, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:32:35.964: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962347, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:32:38.016: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962347, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:32:39.378: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962347, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962345, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 13:32:42.901: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:32:44.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4046" for this suite. STEP: Destroying namespace "webhook-4046-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:27.793 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":106,"skipped":1756,"failed":0} SSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:32:47.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-2371/configmap-test-8d19f0a6-18ad-4876-97aa-327fff709e57 STEP: Creating a pod to test consume configMaps Mar 16 13:32:51.312: INFO: Waiting up to 5m0s for pod "pod-configmaps-a8f4644b-a35b-429a-a536-797b6b3c0d65" in namespace "configmap-2371" to be "Succeeded or Failed" Mar 16 13:32:51.642: INFO: Pod "pod-configmaps-a8f4644b-a35b-429a-a536-797b6b3c0d65": Phase="Pending", Reason="", readiness=false. 
Elapsed: 330.398882ms Mar 16 13:32:53.923: INFO: Pod "pod-configmaps-a8f4644b-a35b-429a-a536-797b6b3c0d65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.611248635s Mar 16 13:32:55.940: INFO: Pod "pod-configmaps-a8f4644b-a35b-429a-a536-797b6b3c0d65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.62802984s Mar 16 13:32:58.279: INFO: Pod "pod-configmaps-a8f4644b-a35b-429a-a536-797b6b3c0d65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.967042297s STEP: Saw pod success Mar 16 13:32:58.279: INFO: Pod "pod-configmaps-a8f4644b-a35b-429a-a536-797b6b3c0d65" satisfied condition "Succeeded or Failed" Mar 16 13:32:58.302: INFO: Trying to get logs from node latest-worker pod pod-configmaps-a8f4644b-a35b-429a-a536-797b6b3c0d65 container env-test: STEP: delete the pod Mar 16 13:32:58.875: INFO: Waiting for pod pod-configmaps-a8f4644b-a35b-429a-a536-797b6b3c0d65 to disappear Mar 16 13:32:58.939: INFO: Pod pod-configmaps-a8f4644b-a35b-429a-a536-797b6b3c0d65 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:32:58.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2371" for this suite. 
• [SLOW TEST:11.267 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":107,"skipped":1765,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:32:59.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 16 13:32:59.618: INFO: Waiting up to 5m0s for pod "pod-6e9b0e06-e0a4-4b5c-b314-424120d778cd" in namespace "emptydir-3032" to be "Succeeded or Failed" Mar 16 13:32:59.780: INFO: Pod "pod-6e9b0e06-e0a4-4b5c-b314-424120d778cd": Phase="Pending", Reason="", readiness=false. Elapsed: 161.481119ms Mar 16 13:33:02.012: INFO: Pod "pod-6e9b0e06-e0a4-4b5c-b314-424120d778cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.393758413s Mar 16 13:33:04.068: INFO: Pod "pod-6e9b0e06-e0a4-4b5c-b314-424120d778cd": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.449516614s Mar 16 13:33:06.134: INFO: Pod "pod-6e9b0e06-e0a4-4b5c-b314-424120d778cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.515898311s STEP: Saw pod success Mar 16 13:33:06.134: INFO: Pod "pod-6e9b0e06-e0a4-4b5c-b314-424120d778cd" satisfied condition "Succeeded or Failed" Mar 16 13:33:06.137: INFO: Trying to get logs from node latest-worker2 pod pod-6e9b0e06-e0a4-4b5c-b314-424120d778cd container test-container: STEP: delete the pod Mar 16 13:33:06.397: INFO: Waiting for pod pod-6e9b0e06-e0a4-4b5c-b314-424120d778cd to disappear Mar 16 13:33:06.548: INFO: Pod pod-6e9b0e06-e0a4-4b5c-b314-424120d778cd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:33:06.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3032" for this suite. • [SLOW TEST:7.501 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":1778,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:33:06.636: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Mar 16 13:33:07.120: INFO: Waiting up to 5m0s for pod "client-containers-8705d9ce-c5b4-42e8-82d4-6c5e4b6eb781" in namespace "containers-9964" to be "Succeeded or Failed" Mar 16 13:33:07.306: INFO: Pod "client-containers-8705d9ce-c5b4-42e8-82d4-6c5e4b6eb781": Phase="Pending", Reason="", readiness=false. Elapsed: 186.401961ms Mar 16 13:33:09.310: INFO: Pod "client-containers-8705d9ce-c5b4-42e8-82d4-6c5e4b6eb781": Phase="Pending", Reason="", readiness=false. Elapsed: 2.19076279s Mar 16 13:33:11.314: INFO: Pod "client-containers-8705d9ce-c5b4-42e8-82d4-6c5e4b6eb781": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194484181s Mar 16 13:33:13.317: INFO: Pod "client-containers-8705d9ce-c5b4-42e8-82d4-6c5e4b6eb781": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.197860813s STEP: Saw pod success Mar 16 13:33:13.318: INFO: Pod "client-containers-8705d9ce-c5b4-42e8-82d4-6c5e4b6eb781" satisfied condition "Succeeded or Failed" Mar 16 13:33:13.320: INFO: Trying to get logs from node latest-worker pod client-containers-8705d9ce-c5b4-42e8-82d4-6c5e4b6eb781 container test-container: STEP: delete the pod Mar 16 13:33:13.457: INFO: Waiting for pod client-containers-8705d9ce-c5b4-42e8-82d4-6c5e4b6eb781 to disappear Mar 16 13:33:13.592: INFO: Pod client-containers-8705d9ce-c5b4-42e8-82d4-6c5e4b6eb781 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:33:13.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9964" for this suite. • [SLOW TEST:6.980 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":109,"skipped":1795,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:33:13.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service 
account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:33:17.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8122" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":110,"skipped":1815,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:33:17.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:33:18.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 16 13:33:19.348: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-16T13:33:19Z generation:1 name:name1 resourceVersion:276895 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 
uid:1adc77fc-1e68-49eb-bebf-9045bba773b7] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 16 13:33:29.353: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-16T13:33:29Z generation:1 name:name2 resourceVersion:276938 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:34625a19-ef52-4cd1-9f03-f5630cb8c402] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 16 13:33:39.358: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-16T13:33:19Z generation:2 name:name1 resourceVersion:276969 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:1adc77fc-1e68-49eb-bebf-9045bba773b7] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 16 13:33:49.362: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-16T13:33:29Z generation:2 name:name2 resourceVersion:276999 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:34625a19-ef52-4cd1-9f03-f5630cb8c402] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 16 13:33:59.367: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-16T13:33:19Z generation:2 name:name1 resourceVersion:277028 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:1adc77fc-1e68-49eb-bebf-9045bba773b7] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 16 13:34:09.422: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-16T13:33:29Z generation:2 name:name2 
resourceVersion:277057 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:34625a19-ef52-4cd1-9f03-f5630cb8c402] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:34:19.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5847" for this suite. • [SLOW TEST:62.052 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":111,"skipped":1837,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:34:20.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 13:34:20.855: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 13:34:22.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962460, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962460, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962461, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962460, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:34:24.875: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962460, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962460, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962461, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962460, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 13:34:27.909: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:34:27.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8649" for this suite. STEP: Destroying namespace "webhook-8649-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.330 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":112,"skipped":1844,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:34:28.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 13:34:30.396: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 13:34:32.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962470, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962470, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962470, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962470, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:34:34.521: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962470, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962470, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962470, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962470, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 13:34:37.690: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] 
should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 16 13:34:37.709: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:34:37.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4435" for this suite. STEP: Destroying namespace "webhook-4435-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.988 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":113,"skipped":1887,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:34:38.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-d05d28e6-a02c-4cd2-8d79-55355d3153d9 STEP: Creating a pod to test consume secrets Mar 16 13:34:38.691: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2779d3f9-19e6-4cfd-b0f3-327586086d50" in namespace "projected-1012" to be "Succeeded or Failed" Mar 16 13:34:39.211: INFO: Pod "pod-projected-secrets-2779d3f9-19e6-4cfd-b0f3-327586086d50": Phase="Pending", Reason="", readiness=false. Elapsed: 520.221638ms Mar 16 13:34:41.214: INFO: Pod "pod-projected-secrets-2779d3f9-19e6-4cfd-b0f3-327586086d50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.523734522s Mar 16 13:34:43.300: INFO: Pod "pod-projected-secrets-2779d3f9-19e6-4cfd-b0f3-327586086d50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.608876305s STEP: Saw pod success Mar 16 13:34:43.300: INFO: Pod "pod-projected-secrets-2779d3f9-19e6-4cfd-b0f3-327586086d50" satisfied condition "Succeeded or Failed" Mar 16 13:34:43.343: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-2779d3f9-19e6-4cfd-b0f3-327586086d50 container projected-secret-volume-test: STEP: delete the pod Mar 16 13:34:43.413: INFO: Waiting for pod pod-projected-secrets-2779d3f9-19e6-4cfd-b0f3-327586086d50 to disappear Mar 16 13:34:43.522: INFO: Pod pod-projected-secrets-2779d3f9-19e6-4cfd-b0f3-327586086d50 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:34:43.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1012" for this suite. 
• [SLOW TEST:5.208 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":114,"skipped":1905,"failed":0} [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:34:43.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Mar 16 13:34:50.323: INFO: Successfully updated pod "labelsupdatea5be5cb7-46bd-453b-8928-5d5338cec299" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:34:52.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7897" for this suite. 
• [SLOW TEST:8.841 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":115,"skipped":1905,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:34:52.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-3a0ba00b-71f7-46c0-b338-abda2d86d333 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:34:58.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6525" for this suite. 
• [SLOW TEST:6.572 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":1909,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:34:58.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-1120 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 16 13:34:59.229: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 16 13:34:59.544: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 16 13:35:01.548: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 16 13:35:03.547: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 16 13:35:05.548: INFO: The status of Pod 
netserver-0 is Running (Ready = false) Mar 16 13:35:07.548: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:35:09.548: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:35:11.547: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:35:13.548: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:35:15.547: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:35:17.858: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:35:19.548: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:35:21.547: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 16 13:35:21.552: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 16 13:35:27.652: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:8080/dial?request=hostname&protocol=http&host=10.244.2.108&port=8080&tries=1'] Namespace:pod-network-test-1120 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:35:27.652: INFO: >>> kubeConfig: /root/.kube/config I0316 13:35:27.691807 7 log.go:172] (0xc002e3a6e0) (0xc000e62640) Create stream I0316 13:35:27.691858 7 log.go:172] (0xc002e3a6e0) (0xc000e62640) Stream added, broadcasting: 1 I0316 13:35:27.693978 7 log.go:172] (0xc002e3a6e0) Reply frame received for 1 I0316 13:35:27.694027 7 log.go:172] (0xc002e3a6e0) (0xc000e62820) Create stream I0316 13:35:27.694044 7 log.go:172] (0xc002e3a6e0) (0xc000e62820) Stream added, broadcasting: 3 I0316 13:35:27.695149 7 log.go:172] (0xc002e3a6e0) Reply frame received for 3 I0316 13:35:27.695193 7 log.go:172] (0xc002e3a6e0) (0xc0024f0a00) Create stream I0316 13:35:27.695211 7 log.go:172] (0xc002e3a6e0) (0xc0024f0a00) Stream added, broadcasting: 5 I0316 13:35:27.696175 7 log.go:172] (0xc002e3a6e0) Reply frame received for 5 I0316 13:35:28.275442 7 
log.go:172] (0xc002e3a6e0) Data frame received for 3 I0316 13:35:28.275472 7 log.go:172] (0xc000e62820) (3) Data frame handling I0316 13:35:28.275491 7 log.go:172] (0xc000e62820) (3) Data frame sent I0316 13:35:28.275925 7 log.go:172] (0xc002e3a6e0) Data frame received for 3 I0316 13:35:28.275945 7 log.go:172] (0xc000e62820) (3) Data frame handling I0316 13:35:28.275994 7 log.go:172] (0xc002e3a6e0) Data frame received for 5 I0316 13:35:28.276021 7 log.go:172] (0xc0024f0a00) (5) Data frame handling I0316 13:35:28.277783 7 log.go:172] (0xc002e3a6e0) Data frame received for 1 I0316 13:35:28.277797 7 log.go:172] (0xc000e62640) (1) Data frame handling I0316 13:35:28.277810 7 log.go:172] (0xc000e62640) (1) Data frame sent I0316 13:35:28.277856 7 log.go:172] (0xc002e3a6e0) (0xc000e62640) Stream removed, broadcasting: 1 I0316 13:35:28.277903 7 log.go:172] (0xc002e3a6e0) Go away received I0316 13:35:28.277943 7 log.go:172] (0xc002e3a6e0) (0xc000e62640) Stream removed, broadcasting: 1 I0316 13:35:28.277956 7 log.go:172] (0xc002e3a6e0) (0xc000e62820) Stream removed, broadcasting: 3 I0316 13:35:28.277965 7 log.go:172] (0xc002e3a6e0) (0xc0024f0a00) Stream removed, broadcasting: 5 Mar 16 13:35:28.278: INFO: Waiting for responses: map[] Mar 16 13:35:28.280: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.109:8080/dial?request=hostname&protocol=http&host=10.244.1.34&port=8080&tries=1'] Namespace:pod-network-test-1120 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:35:28.280: INFO: >>> kubeConfig: /root/.kube/config I0316 13:35:28.306938 7 log.go:172] (0xc002be2370) (0xc0024f10e0) Create stream I0316 13:35:28.306968 7 log.go:172] (0xc002be2370) (0xc0024f10e0) Stream added, broadcasting: 1 I0316 13:35:28.309611 7 log.go:172] (0xc002be2370) Reply frame received for 1 I0316 13:35:28.309652 7 log.go:172] (0xc002be2370) (0xc0024f1220) Create stream I0316 13:35:28.309669 7 
log.go:172] (0xc002be2370) (0xc0024f1220) Stream added, broadcasting: 3 I0316 13:35:28.310738 7 log.go:172] (0xc002be2370) Reply frame received for 3 I0316 13:35:28.310782 7 log.go:172] (0xc002be2370) (0xc0024f1400) Create stream I0316 13:35:28.310798 7 log.go:172] (0xc002be2370) (0xc0024f1400) Stream added, broadcasting: 5 I0316 13:35:28.311768 7 log.go:172] (0xc002be2370) Reply frame received for 5 I0316 13:35:28.371876 7 log.go:172] (0xc002be2370) Data frame received for 3 I0316 13:35:28.371924 7 log.go:172] (0xc0024f1220) (3) Data frame handling I0316 13:35:28.371959 7 log.go:172] (0xc0024f1220) (3) Data frame sent I0316 13:35:28.372500 7 log.go:172] (0xc002be2370) Data frame received for 5 I0316 13:35:28.372543 7 log.go:172] (0xc0024f1400) (5) Data frame handling I0316 13:35:28.372579 7 log.go:172] (0xc002be2370) Data frame received for 3 I0316 13:35:28.372617 7 log.go:172] (0xc0024f1220) (3) Data frame handling I0316 13:35:28.378610 7 log.go:172] (0xc002be2370) Data frame received for 1 I0316 13:35:28.378648 7 log.go:172] (0xc0024f10e0) (1) Data frame handling I0316 13:35:28.378675 7 log.go:172] (0xc0024f10e0) (1) Data frame sent I0316 13:35:28.378714 7 log.go:172] (0xc002be2370) (0xc0024f10e0) Stream removed, broadcasting: 1 I0316 13:35:28.378775 7 log.go:172] (0xc002be2370) Go away received I0316 13:35:28.378827 7 log.go:172] (0xc002be2370) (0xc0024f10e0) Stream removed, broadcasting: 1 I0316 13:35:28.378846 7 log.go:172] (0xc002be2370) (0xc0024f1220) Stream removed, broadcasting: 3 I0316 13:35:28.378860 7 log.go:172] (0xc002be2370) (0xc0024f1400) Stream removed, broadcasting: 5 Mar 16 13:35:28.378: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:35:28.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1120" for this suite. 
• [SLOW TEST:29.444 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":117,"skipped":1930,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:35:28.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 13:35:30.008: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 13:35:32.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962530, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962530, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962530, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962529, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:35:34.056: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962530, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962530, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962530, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962529, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 13:35:37.253: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod 
and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:35:37.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4440" for this suite. STEP: Destroying namespace "webhook-4440-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.796 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":118,"skipped":1961,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:35:38.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default 
service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0316 13:35:40.922531 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 16 13:35:40.922: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:35:40.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-723" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":119,"skipped":1973,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:35:40.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 16 13:35:41.705: INFO: PodSpec: initContainers in spec.initContainers Mar 16 13:36:36.081: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d3a781ea-e8c2-44fa-ac2d-b025d2d04534", GenerateName:"", Namespace:"init-container-3800", SelfLink:"/api/v1/namespaces/init-container-3800/pods/pod-init-d3a781ea-e8c2-44fa-ac2d-b025d2d04534", UID:"c6dce2fa-2379-4946-bf98-dcbed41568c6", ResourceVersion:"277875", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63719962541, loc:(*time.Location)(0x7b1e080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"705466755"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-pkq8k", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0018f0c80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pkq8k", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pkq8k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", 
Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-pkq8k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002a91a38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001fe2af0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a91ac0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002a91ae0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002a91ae8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002a91aec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", 
Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962542, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962542, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962542, loc:(*time.Location)(0x7b1e080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962541, loc:(*time.Location)(0x7b1e080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.1.37", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.37"}}, StartTime:(*v1.Time)(0xc00278fac0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00278fb00), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001fe2bd0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://af25c77ffe125619e272ee6564b2510d3e7967191627d5a235a5693f7efcd31a", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00278fb40), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00278fae0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002a91b6f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:36:36.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3800" for this suite. 
• [SLOW TEST:55.236 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":120,"skipped":1989,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:36:36.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 16 13:36:36.531: INFO: Waiting up to 5m0s for pod "pod-6ce33daf-be5f-448c-8d72-ae761cfe8d3c" in namespace "emptydir-1761" to be "Succeeded or Failed" Mar 16 13:36:36.619: INFO: Pod "pod-6ce33daf-be5f-448c-8d72-ae761cfe8d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 87.941315ms Mar 16 13:36:38.769: INFO: Pod "pod-6ce33daf-be5f-448c-8d72-ae761cfe8d3c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.23818541s Mar 16 13:36:41.129: INFO: Pod "pod-6ce33daf-be5f-448c-8d72-ae761cfe8d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.597244615s Mar 16 13:36:43.188: INFO: Pod "pod-6ce33daf-be5f-448c-8d72-ae761cfe8d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.657066999s Mar 16 13:36:45.590: INFO: Pod "pod-6ce33daf-be5f-448c-8d72-ae761cfe8d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.058523268s Mar 16 13:36:47.594: INFO: Pod "pod-6ce33daf-be5f-448c-8d72-ae761cfe8d3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.062566101s STEP: Saw pod success Mar 16 13:36:47.594: INFO: Pod "pod-6ce33daf-be5f-448c-8d72-ae761cfe8d3c" satisfied condition "Succeeded or Failed" Mar 16 13:36:47.596: INFO: Trying to get logs from node latest-worker pod pod-6ce33daf-be5f-448c-8d72-ae761cfe8d3c container test-container: STEP: delete the pod Mar 16 13:36:48.075: INFO: Waiting for pod pod-6ce33daf-be5f-448c-8d72-ae761cfe8d3c to disappear Mar 16 13:36:48.218: INFO: Pod pod-6ce33daf-be5f-448c-8d72-ae761cfe8d3c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:36:48.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1761" for this suite. 
• [SLOW TEST:12.081 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":2006,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:36:48.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-e5e3c01b-033f-457c-bdce-8538bcdfe6fd STEP: Creating a pod to test consume configMaps Mar 16 13:36:49.028: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cf155834-d27e-4b92-bbf8-7c07cb517e79" in namespace "projected-5101" to be "Succeeded or Failed" Mar 16 13:36:49.044: INFO: Pod "pod-projected-configmaps-cf155834-d27e-4b92-bbf8-7c07cb517e79": Phase="Pending", Reason="", readiness=false. 
Elapsed: 16.341661ms Mar 16 13:36:51.207: INFO: Pod "pod-projected-configmaps-cf155834-d27e-4b92-bbf8-7c07cb517e79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178896082s Mar 16 13:36:53.211: INFO: Pod "pod-projected-configmaps-cf155834-d27e-4b92-bbf8-7c07cb517e79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182388453s Mar 16 13:36:55.214: INFO: Pod "pod-projected-configmaps-cf155834-d27e-4b92-bbf8-7c07cb517e79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.18622149s STEP: Saw pod success Mar 16 13:36:55.214: INFO: Pod "pod-projected-configmaps-cf155834-d27e-4b92-bbf8-7c07cb517e79" satisfied condition "Succeeded or Failed" Mar 16 13:36:55.217: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-cf155834-d27e-4b92-bbf8-7c07cb517e79 container projected-configmap-volume-test: STEP: delete the pod Mar 16 13:36:55.430: INFO: Waiting for pod pod-projected-configmaps-cf155834-d27e-4b92-bbf8-7c07cb517e79 to disappear Mar 16 13:36:55.505: INFO: Pod pod-projected-configmaps-cf155834-d27e-4b92-bbf8-7c07cb517e79 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:36:55.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5101" for this suite. 
• [SLOW TEST:7.473 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":122,"skipped":2016,"failed":0} SSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:36:55.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-3709 STEP: creating replication controller nodeport-test in namespace services-3709 I0316 13:36:56.758154 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-3709, replica count: 2 I0316 13:36:59.808613 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 13:37:02.808917 7 
runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 13:37:05.809285 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 16 13:37:05.809: INFO: Creating new exec pod Mar 16 13:37:12.831: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-3709 execpodhdjgd -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 16 13:37:13.035: INFO: stderr: "I0316 13:37:12.950328 745 log.go:172] (0xc000af6790) (0xc0005a5040) Create stream\nI0316 13:37:12.950385 745 log.go:172] (0xc000af6790) (0xc0005a5040) Stream added, broadcasting: 1\nI0316 13:37:12.952781 745 log.go:172] (0xc000af6790) Reply frame received for 1\nI0316 13:37:12.952825 745 log.go:172] (0xc000af6790) (0xc000ade000) Create stream\nI0316 13:37:12.952843 745 log.go:172] (0xc000af6790) (0xc000ade000) Stream added, broadcasting: 3\nI0316 13:37:12.954003 745 log.go:172] (0xc000af6790) Reply frame received for 3\nI0316 13:37:12.954039 745 log.go:172] (0xc000af6790) (0xc000a6e000) Create stream\nI0316 13:37:12.954049 745 log.go:172] (0xc000af6790) (0xc000a6e000) Stream added, broadcasting: 5\nI0316 13:37:12.954958 745 log.go:172] (0xc000af6790) Reply frame received for 5\nI0316 13:37:13.028583 745 log.go:172] (0xc000af6790) Data frame received for 5\nI0316 13:37:13.028610 745 log.go:172] (0xc000a6e000) (5) Data frame handling\nI0316 13:37:13.028627 745 log.go:172] (0xc000a6e000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0316 13:37:13.029239 745 log.go:172] (0xc000af6790) Data frame received for 5\nI0316 13:37:13.029254 745 log.go:172] (0xc000a6e000) (5) Data frame handling\nI0316 13:37:13.029262 745 log.go:172] (0xc000a6e000) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0316 13:37:13.030116 745 
log.go:172] (0xc000af6790) Data frame received for 5\nI0316 13:37:13.030146 745 log.go:172] (0xc000a6e000) (5) Data frame handling\nI0316 13:37:13.030268 745 log.go:172] (0xc000af6790) Data frame received for 3\nI0316 13:37:13.030298 745 log.go:172] (0xc000ade000) (3) Data frame handling\nI0316 13:37:13.031582 745 log.go:172] (0xc000af6790) Data frame received for 1\nI0316 13:37:13.031599 745 log.go:172] (0xc0005a5040) (1) Data frame handling\nI0316 13:37:13.031607 745 log.go:172] (0xc0005a5040) (1) Data frame sent\nI0316 13:37:13.031624 745 log.go:172] (0xc000af6790) (0xc0005a5040) Stream removed, broadcasting: 1\nI0316 13:37:13.031656 745 log.go:172] (0xc000af6790) Go away received\nI0316 13:37:13.032377 745 log.go:172] (0xc000af6790) (0xc0005a5040) Stream removed, broadcasting: 1\nI0316 13:37:13.032406 745 log.go:172] (0xc000af6790) (0xc000ade000) Stream removed, broadcasting: 3\nI0316 13:37:13.032428 745 log.go:172] (0xc000af6790) (0xc000a6e000) Stream removed, broadcasting: 5\n" Mar 16 13:37:13.035: INFO: stdout: "" Mar 16 13:37:13.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-3709 execpodhdjgd -- /bin/sh -x -c nc -zv -t -w 2 10.96.132.26 80' Mar 16 13:37:13.226: INFO: stderr: "I0316 13:37:13.152959 765 log.go:172] (0xc00003a6e0) (0xc0002aeb40) Create stream\nI0316 13:37:13.153021 765 log.go:172] (0xc00003a6e0) (0xc0002aeb40) Stream added, broadcasting: 1\nI0316 13:37:13.155581 765 log.go:172] (0xc00003a6e0) Reply frame received for 1\nI0316 13:37:13.155626 765 log.go:172] (0xc00003a6e0) (0xc000aee000) Create stream\nI0316 13:37:13.155641 765 log.go:172] (0xc00003a6e0) (0xc000aee000) Stream added, broadcasting: 3\nI0316 13:37:13.156559 765 log.go:172] (0xc00003a6e0) Reply frame received for 3\nI0316 13:37:13.156593 765 log.go:172] (0xc00003a6e0) (0xc00090c000) Create stream\nI0316 13:37:13.156605 765 log.go:172] (0xc00003a6e0) (0xc00090c000) Stream added, broadcasting: 
5\nI0316 13:37:13.157488 765 log.go:172] (0xc00003a6e0) Reply frame received for 5\nI0316 13:37:13.219644 765 log.go:172] (0xc00003a6e0) Data frame received for 3\nI0316 13:37:13.219680 765 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0316 13:37:13.219714 765 log.go:172] (0xc00090c000) (5) Data frame handling\nI0316 13:37:13.219727 765 log.go:172] (0xc00090c000) (5) Data frame sent\nI0316 13:37:13.219735 765 log.go:172] (0xc00003a6e0) Data frame received for 5\nI0316 13:37:13.219743 765 log.go:172] (0xc00090c000) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.132.26 80\nConnection to 10.96.132.26 80 port [tcp/http] succeeded!\nI0316 13:37:13.219765 765 log.go:172] (0xc000aee000) (3) Data frame handling\nI0316 13:37:13.221613 765 log.go:172] (0xc00003a6e0) Data frame received for 1\nI0316 13:37:13.221639 765 log.go:172] (0xc0002aeb40) (1) Data frame handling\nI0316 13:37:13.221664 765 log.go:172] (0xc0002aeb40) (1) Data frame sent\nI0316 13:37:13.221733 765 log.go:172] (0xc00003a6e0) (0xc0002aeb40) Stream removed, broadcasting: 1\nI0316 13:37:13.221753 765 log.go:172] (0xc00003a6e0) Go away received\nI0316 13:37:13.222069 765 log.go:172] (0xc00003a6e0) (0xc0002aeb40) Stream removed, broadcasting: 1\nI0316 13:37:13.222096 765 log.go:172] (0xc00003a6e0) (0xc000aee000) Stream removed, broadcasting: 3\nI0316 13:37:13.222104 765 log.go:172] (0xc00003a6e0) (0xc00090c000) Stream removed, broadcasting: 5\n" Mar 16 13:37:13.226: INFO: stdout: "" Mar 16 13:37:13.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-3709 execpodhdjgd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 32631' Mar 16 13:37:13.402: INFO: stderr: "I0316 13:37:13.347438 786 log.go:172] (0xc0009b80b0) (0xc00035ec80) Create stream\nI0316 13:37:13.347494 786 log.go:172] (0xc0009b80b0) (0xc00035ec80) Stream added, broadcasting: 1\nI0316 13:37:13.349958 786 log.go:172] (0xc0009b80b0) Reply frame received for 1\nI0316 
13:37:13.350032 786 log.go:172] (0xc0009b80b0) (0xc000974000) Create stream\nI0316 13:37:13.350062 786 log.go:172] (0xc0009b80b0) (0xc000974000) Stream added, broadcasting: 3\nI0316 13:37:13.350969 786 log.go:172] (0xc0009b80b0) Reply frame received for 3\nI0316 13:37:13.351011 786 log.go:172] (0xc0009b80b0) (0xc0008de000) Create stream\nI0316 13:37:13.351024 786 log.go:172] (0xc0009b80b0) (0xc0008de000) Stream added, broadcasting: 5\nI0316 13:37:13.351783 786 log.go:172] (0xc0009b80b0) Reply frame received for 5\nI0316 13:37:13.396849 786 log.go:172] (0xc0009b80b0) Data frame received for 3\nI0316 13:37:13.396887 786 log.go:172] (0xc000974000) (3) Data frame handling\nI0316 13:37:13.396978 786 log.go:172] (0xc0009b80b0) Data frame received for 5\nI0316 13:37:13.397016 786 log.go:172] (0xc0008de000) (5) Data frame handling\nI0316 13:37:13.397039 786 log.go:172] (0xc0008de000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.13 32631\nConnection to 172.17.0.13 32631 port [tcp/32631] succeeded!\nI0316 13:37:13.397096 786 log.go:172] (0xc0009b80b0) Data frame received for 5\nI0316 13:37:13.397183 786 log.go:172] (0xc0008de000) (5) Data frame handling\nI0316 13:37:13.398682 786 log.go:172] (0xc0009b80b0) Data frame received for 1\nI0316 13:37:13.398696 786 log.go:172] (0xc00035ec80) (1) Data frame handling\nI0316 13:37:13.398705 786 log.go:172] (0xc00035ec80) (1) Data frame sent\nI0316 13:37:13.398941 786 log.go:172] (0xc0009b80b0) (0xc00035ec80) Stream removed, broadcasting: 1\nI0316 13:37:13.399183 786 log.go:172] (0xc0009b80b0) Go away received\nI0316 13:37:13.399228 786 log.go:172] (0xc0009b80b0) (0xc00035ec80) Stream removed, broadcasting: 1\nI0316 13:37:13.399242 786 log.go:172] (0xc0009b80b0) (0xc000974000) Stream removed, broadcasting: 3\nI0316 13:37:13.399249 786 log.go:172] (0xc0009b80b0) (0xc0008de000) Stream removed, broadcasting: 5\n" Mar 16 13:37:13.402: INFO: stdout: "" Mar 16 13:37:13.402: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-3709 execpodhdjgd -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 32631' Mar 16 13:37:13.577: INFO: stderr: "I0316 13:37:13.517369 807 log.go:172] (0xc000990b00) (0xc000976460) Create stream\nI0316 13:37:13.517413 807 log.go:172] (0xc000990b00) (0xc000976460) Stream added, broadcasting: 1\nI0316 13:37:13.520994 807 log.go:172] (0xc000990b00) Reply frame received for 1\nI0316 13:37:13.521031 807 log.go:172] (0xc000990b00) (0xc000976000) Create stream\nI0316 13:37:13.521043 807 log.go:172] (0xc000990b00) (0xc000976000) Stream added, broadcasting: 3\nI0316 13:37:13.522104 807 log.go:172] (0xc000990b00) Reply frame received for 3\nI0316 13:37:13.522129 807 log.go:172] (0xc000990b00) (0xc0004cefa0) Create stream\nI0316 13:37:13.522138 807 log.go:172] (0xc000990b00) (0xc0004cefa0) Stream added, broadcasting: 5\nI0316 13:37:13.522940 807 log.go:172] (0xc000990b00) Reply frame received for 5\nI0316 13:37:13.572461 807 log.go:172] (0xc000990b00) Data frame received for 5\nI0316 13:37:13.572502 807 log.go:172] (0xc0004cefa0) (5) Data frame handling\nI0316 13:37:13.572530 807 log.go:172] (0xc0004cefa0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 32631\nI0316 13:37:13.572910 807 log.go:172] (0xc000990b00) Data frame received for 5\nI0316 13:37:13.572957 807 log.go:172] (0xc0004cefa0) (5) Data frame handling\nI0316 13:37:13.572982 807 log.go:172] (0xc0004cefa0) (5) Data frame sent\nConnection to 172.17.0.12 32631 port [tcp/32631] succeeded!\nI0316 13:37:13.573089 807 log.go:172] (0xc000990b00) Data frame received for 5\nI0316 13:37:13.573102 807 log.go:172] (0xc0004cefa0) (5) Data frame handling\nI0316 13:37:13.573270 807 log.go:172] (0xc000990b00) Data frame received for 3\nI0316 13:37:13.573281 807 log.go:172] (0xc000976000) (3) Data frame handling\nI0316 13:37:13.574398 807 log.go:172] (0xc000990b00) Data frame received for 1\nI0316 13:37:13.574413 807 log.go:172] (0xc000976460) 
(1) Data frame handling\nI0316 13:37:13.574423 807 log.go:172] (0xc000976460) (1) Data frame sent\nI0316 13:37:13.574433 807 log.go:172] (0xc000990b00) (0xc000976460) Stream removed, broadcasting: 1\nI0316 13:37:13.574444 807 log.go:172] (0xc000990b00) Go away received\nI0316 13:37:13.574684 807 log.go:172] (0xc000990b00) (0xc000976460) Stream removed, broadcasting: 1\nI0316 13:37:13.574697 807 log.go:172] (0xc000990b00) (0xc000976000) Stream removed, broadcasting: 3\nI0316 13:37:13.574702 807 log.go:172] (0xc000990b00) (0xc0004cefa0) Stream removed, broadcasting: 5\n" Mar 16 13:37:13.577: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:37:13.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3709" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:18.027 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":123,"skipped":2019,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:37:13.746: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command Mar 16 13:37:13.872: INFO: Waiting up to 5m0s for pod "client-containers-2f8c29ee-78ae-4a99-aa50-0dd5fd5701c9" in namespace "containers-4661" to be "Succeeded or Failed" Mar 16 13:37:13.896: INFO: Pod "client-containers-2f8c29ee-78ae-4a99-aa50-0dd5fd5701c9": Phase="Pending", Reason="", readiness=false. Elapsed: 24.077684ms Mar 16 13:37:15.965: INFO: Pod "client-containers-2f8c29ee-78ae-4a99-aa50-0dd5fd5701c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093128217s Mar 16 13:37:17.970: INFO: Pod "client-containers-2f8c29ee-78ae-4a99-aa50-0dd5fd5701c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097454962s Mar 16 13:37:20.898: INFO: Pod "client-containers-2f8c29ee-78ae-4a99-aa50-0dd5fd5701c9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.026122736s STEP: Saw pod success Mar 16 13:37:20.898: INFO: Pod "client-containers-2f8c29ee-78ae-4a99-aa50-0dd5fd5701c9" satisfied condition "Succeeded or Failed" Mar 16 13:37:20.902: INFO: Trying to get logs from node latest-worker2 pod client-containers-2f8c29ee-78ae-4a99-aa50-0dd5fd5701c9 container test-container: STEP: delete the pod Mar 16 13:37:22.356: INFO: Waiting for pod client-containers-2f8c29ee-78ae-4a99-aa50-0dd5fd5701c9 to disappear Mar 16 13:37:22.584: INFO: Pod client-containers-2f8c29ee-78ae-4a99-aa50-0dd5fd5701c9 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:37:22.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4661" for this suite. • [SLOW TEST:8.892 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":2026,"failed":0} SSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:37:22.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Mar 16 13:37:39.453: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9054 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:37:39.453: INFO: >>> kubeConfig: /root/.kube/config I0316 13:37:39.495637 7 log.go:172] (0xc002c74a50) (0xc000c5b4a0) Create stream I0316 13:37:39.495673 7 log.go:172] (0xc002c74a50) (0xc000c5b4a0) Stream added, broadcasting: 1 I0316 13:37:39.497723 7 log.go:172] (0xc002c74a50) Reply frame received for 1 I0316 13:37:39.497753 7 log.go:172] (0xc002c74a50) (0xc001df4000) Create stream I0316 13:37:39.497762 7 log.go:172] (0xc002c74a50) (0xc001df4000) Stream added, broadcasting: 3 I0316 13:37:39.498604 7 log.go:172] (0xc002c74a50) Reply frame received for 3 I0316 13:37:39.498649 7 log.go:172] (0xc002c74a50) (0xc001df4320) Create stream I0316 13:37:39.498665 7 log.go:172] (0xc002c74a50) (0xc001df4320) Stream added, broadcasting: 5 I0316 13:37:39.499631 7 log.go:172] (0xc002c74a50) Reply frame received for 5 I0316 13:37:39.552566 7 log.go:172] (0xc002c74a50) Data frame received for 5 I0316 13:37:39.552589 7 log.go:172] (0xc001df4320) (5) Data frame handling I0316 13:37:39.552618 7 log.go:172] (0xc002c74a50) Data frame received for 3 I0316 13:37:39.552639 7 log.go:172] (0xc001df4000) (3) Data frame handling I0316 13:37:39.552651 7 log.go:172] (0xc001df4000) (3) Data frame sent I0316 13:37:39.552668 7 log.go:172] (0xc002c74a50) Data frame received for 3 I0316 
13:37:39.552674 7 log.go:172] (0xc001df4000) (3) Data frame handling I0316 13:37:39.556300 7 log.go:172] (0xc002c74a50) Data frame received for 1 I0316 13:37:39.556328 7 log.go:172] (0xc000c5b4a0) (1) Data frame handling I0316 13:37:39.556351 7 log.go:172] (0xc000c5b4a0) (1) Data frame sent I0316 13:37:39.556369 7 log.go:172] (0xc002c74a50) (0xc000c5b4a0) Stream removed, broadcasting: 1 I0316 13:37:39.556383 7 log.go:172] (0xc002c74a50) Go away received I0316 13:37:39.556535 7 log.go:172] (0xc002c74a50) (0xc000c5b4a0) Stream removed, broadcasting: 1 I0316 13:37:39.556552 7 log.go:172] (0xc002c74a50) (0xc001df4000) Stream removed, broadcasting: 3 I0316 13:37:39.556561 7 log.go:172] (0xc002c74a50) (0xc001df4320) Stream removed, broadcasting: 5 Mar 16 13:37:39.556: INFO: Exec stderr: "" Mar 16 13:37:39.556: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9054 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:37:39.556: INFO: >>> kubeConfig: /root/.kube/config I0316 13:37:39.578115 7 log.go:172] (0xc0028af1e0) (0xc0024f03c0) Create stream I0316 13:37:39.578141 7 log.go:172] (0xc0028af1e0) (0xc0024f03c0) Stream added, broadcasting: 1 I0316 13:37:39.579948 7 log.go:172] (0xc0028af1e0) Reply frame received for 1 I0316 13:37:39.579983 7 log.go:172] (0xc0028af1e0) (0xc001aad400) Create stream I0316 13:37:39.579992 7 log.go:172] (0xc0028af1e0) (0xc001aad400) Stream added, broadcasting: 3 I0316 13:37:39.580612 7 log.go:172] (0xc0028af1e0) Reply frame received for 3 I0316 13:37:39.580641 7 log.go:172] (0xc0028af1e0) (0xc0024f0460) Create stream I0316 13:37:39.580658 7 log.go:172] (0xc0028af1e0) (0xc0024f0460) Stream added, broadcasting: 5 I0316 13:37:39.581524 7 log.go:172] (0xc0028af1e0) Reply frame received for 5 I0316 13:37:39.639983 7 log.go:172] (0xc0028af1e0) Data frame received for 5 I0316 13:37:39.640006 7 log.go:172] (0xc0028af1e0) Data frame received for 3 
I0316 13:37:39.640018 7 log.go:172] (0xc001aad400) (3) Data frame handling I0316 13:37:39.640027 7 log.go:172] (0xc001aad400) (3) Data frame sent I0316 13:37:39.640034 7 log.go:172] (0xc0028af1e0) Data frame received for 3 I0316 13:37:39.640052 7 log.go:172] (0xc0024f0460) (5) Data frame handling I0316 13:37:39.640095 7 log.go:172] (0xc001aad400) (3) Data frame handling I0316 13:37:39.642325 7 log.go:172] (0xc0028af1e0) Data frame received for 1 I0316 13:37:39.642336 7 log.go:172] (0xc0024f03c0) (1) Data frame handling I0316 13:37:39.642346 7 log.go:172] (0xc0024f03c0) (1) Data frame sent I0316 13:37:39.642354 7 log.go:172] (0xc0028af1e0) (0xc0024f03c0) Stream removed, broadcasting: 1 I0316 13:37:39.642439 7 log.go:172] (0xc0028af1e0) (0xc0024f03c0) Stream removed, broadcasting: 1 I0316 13:37:39.642456 7 log.go:172] (0xc0028af1e0) (0xc001aad400) Stream removed, broadcasting: 3 I0316 13:37:39.642501 7 log.go:172] (0xc0028af1e0) Go away received I0316 13:37:39.642579 7 log.go:172] (0xc0028af1e0) (0xc0024f0460) Stream removed, broadcasting: 5 Mar 16 13:37:39.642: INFO: Exec stderr: "" Mar 16 13:37:39.642: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9054 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:37:39.642: INFO: >>> kubeConfig: /root/.kube/config I0316 13:37:39.673948 7 log.go:172] (0xc002e3a790) (0xc001aad860) Create stream I0316 13:37:39.673987 7 log.go:172] (0xc002e3a790) (0xc001aad860) Stream added, broadcasting: 1 I0316 13:37:39.676356 7 log.go:172] (0xc002e3a790) Reply frame received for 1 I0316 13:37:39.676382 7 log.go:172] (0xc002e3a790) (0xc0024f0640) Create stream I0316 13:37:39.676393 7 log.go:172] (0xc002e3a790) (0xc0024f0640) Stream added, broadcasting: 3 I0316 13:37:39.677106 7 log.go:172] (0xc002e3a790) Reply frame received for 3 I0316 13:37:39.677240 7 log.go:172] (0xc002e3a790) (0xc001df4460) Create stream I0316 13:37:39.677250 7 
log.go:172] (0xc002e3a790) (0xc001df4460) Stream added, broadcasting: 5 I0316 13:37:39.678152 7 log.go:172] (0xc002e3a790) Reply frame received for 5 I0316 13:37:39.735765 7 log.go:172] (0xc002e3a790) Data frame received for 5 I0316 13:37:39.735808 7 log.go:172] (0xc002e3a790) Data frame received for 3 I0316 13:37:39.735846 7 log.go:172] (0xc0024f0640) (3) Data frame handling I0316 13:37:39.735862 7 log.go:172] (0xc0024f0640) (3) Data frame sent I0316 13:37:39.735877 7 log.go:172] (0xc002e3a790) Data frame received for 3 I0316 13:37:39.735889 7 log.go:172] (0xc0024f0640) (3) Data frame handling I0316 13:37:39.735906 7 log.go:172] (0xc001df4460) (5) Data frame handling I0316 13:37:39.736968 7 log.go:172] (0xc002e3a790) Data frame received for 1 I0316 13:37:39.737000 7 log.go:172] (0xc001aad860) (1) Data frame handling I0316 13:37:39.737022 7 log.go:172] (0xc001aad860) (1) Data frame sent I0316 13:37:39.737269 7 log.go:172] (0xc002e3a790) (0xc001aad860) Stream removed, broadcasting: 1 I0316 13:37:39.737319 7 log.go:172] (0xc002e3a790) Go away received I0316 13:37:39.737417 7 log.go:172] (0xc002e3a790) (0xc001aad860) Stream removed, broadcasting: 1 I0316 13:37:39.737446 7 log.go:172] (0xc002e3a790) (0xc0024f0640) Stream removed, broadcasting: 3 I0316 13:37:39.737456 7 log.go:172] (0xc002e3a790) (0xc001df4460) Stream removed, broadcasting: 5 Mar 16 13:37:39.737: INFO: Exec stderr: "" Mar 16 13:37:39.737: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9054 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:37:39.737: INFO: >>> kubeConfig: /root/.kube/config I0316 13:37:39.767223 7 log.go:172] (0xc0028af810) (0xc0024f0be0) Create stream I0316 13:37:39.767246 7 log.go:172] (0xc0028af810) (0xc0024f0be0) Stream added, broadcasting: 1 I0316 13:37:39.769572 7 log.go:172] (0xc0028af810) Reply frame received for 1 I0316 13:37:39.769592 7 log.go:172] (0xc0028af810) 
(0xc0024f0c80) Create stream I0316 13:37:39.769601 7 log.go:172] (0xc0028af810) (0xc0024f0c80) Stream added, broadcasting: 3 I0316 13:37:39.770607 7 log.go:172] (0xc0028af810) Reply frame received for 3 I0316 13:37:39.770672 7 log.go:172] (0xc0028af810) (0xc000c5b900) Create stream I0316 13:37:39.770730 7 log.go:172] (0xc0028af810) (0xc000c5b900) Stream added, broadcasting: 5 I0316 13:37:39.771655 7 log.go:172] (0xc0028af810) Reply frame received for 5 I0316 13:37:39.823774 7 log.go:172] (0xc0028af810) Data frame received for 5 I0316 13:37:39.823814 7 log.go:172] (0xc000c5b900) (5) Data frame handling I0316 13:37:39.823886 7 log.go:172] (0xc0028af810) Data frame received for 3 I0316 13:37:39.823948 7 log.go:172] (0xc0024f0c80) (3) Data frame handling I0316 13:37:39.824006 7 log.go:172] (0xc0024f0c80) (3) Data frame sent I0316 13:37:39.824043 7 log.go:172] (0xc0028af810) Data frame received for 3 I0316 13:37:39.824060 7 log.go:172] (0xc0024f0c80) (3) Data frame handling I0316 13:37:39.825481 7 log.go:172] (0xc0028af810) Data frame received for 1 I0316 13:37:39.825517 7 log.go:172] (0xc0024f0be0) (1) Data frame handling I0316 13:37:39.825570 7 log.go:172] (0xc0024f0be0) (1) Data frame sent I0316 13:37:39.825599 7 log.go:172] (0xc0028af810) (0xc0024f0be0) Stream removed, broadcasting: 1 I0316 13:37:39.825636 7 log.go:172] (0xc0028af810) Go away received I0316 13:37:39.825707 7 log.go:172] (0xc0028af810) (0xc0024f0be0) Stream removed, broadcasting: 1 I0316 13:37:39.825727 7 log.go:172] (0xc0028af810) (0xc0024f0c80) Stream removed, broadcasting: 3 I0316 13:37:39.825733 7 log.go:172] (0xc0028af810) (0xc000c5b900) Stream removed, broadcasting: 5 Mar 16 13:37:39.825: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Mar 16 13:37:39.825: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9054 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true 
CaptureStderr:true PreserveWhitespace:false} Mar 16 13:37:39.825: INFO: >>> kubeConfig: /root/.kube/config I0316 13:37:39.852919 7 log.go:172] (0xc002c75130) (0xc000c5bcc0) Create stream I0316 13:37:39.852940 7 log.go:172] (0xc002c75130) (0xc000c5bcc0) Stream added, broadcasting: 1 I0316 13:37:39.855088 7 log.go:172] (0xc002c75130) Reply frame received for 1 I0316 13:37:39.855113 7 log.go:172] (0xc002c75130) (0xc002743ea0) Create stream I0316 13:37:39.855126 7 log.go:172] (0xc002c75130) (0xc002743ea0) Stream added, broadcasting: 3 I0316 13:37:39.855822 7 log.go:172] (0xc002c75130) Reply frame received for 3 I0316 13:37:39.855848 7 log.go:172] (0xc002c75130) (0xc001aad900) Create stream I0316 13:37:39.855855 7 log.go:172] (0xc002c75130) (0xc001aad900) Stream added, broadcasting: 5 I0316 13:37:39.856577 7 log.go:172] (0xc002c75130) Reply frame received for 5 I0316 13:37:39.933874 7 log.go:172] (0xc002c75130) Data frame received for 3 I0316 13:37:39.933896 7 log.go:172] (0xc002743ea0) (3) Data frame handling I0316 13:37:39.933908 7 log.go:172] (0xc002743ea0) (3) Data frame sent I0316 13:37:39.934011 7 log.go:172] (0xc002c75130) Data frame received for 5 I0316 13:37:39.934035 7 log.go:172] (0xc001aad900) (5) Data frame handling I0316 13:37:39.934074 7 log.go:172] (0xc002c75130) Data frame received for 3 I0316 13:37:39.934144 7 log.go:172] (0xc002743ea0) (3) Data frame handling I0316 13:37:39.935807 7 log.go:172] (0xc002c75130) Data frame received for 1 I0316 13:37:39.935822 7 log.go:172] (0xc000c5bcc0) (1) Data frame handling I0316 13:37:39.935828 7 log.go:172] (0xc000c5bcc0) (1) Data frame sent I0316 13:37:39.935922 7 log.go:172] (0xc002c75130) (0xc000c5bcc0) Stream removed, broadcasting: 1 I0316 13:37:39.936069 7 log.go:172] (0xc002c75130) (0xc000c5bcc0) Stream removed, broadcasting: 1 I0316 13:37:39.936122 7 log.go:172] (0xc002c75130) (0xc002743ea0) Stream removed, broadcasting: 3 I0316 13:37:39.936156 7 log.go:172] (0xc002c75130) (0xc001aad900) Stream removed, 
broadcasting: 5 Mar 16 13:37:39.936: INFO: Exec stderr: "" I0316 13:37:39.936254 7 log.go:172] (0xc002c75130) Go away received Mar 16 13:37:39.936: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9054 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:37:39.936: INFO: >>> kubeConfig: /root/.kube/config I0316 13:37:39.970704 7 log.go:172] (0xc002d90840) (0xc0021f8280) Create stream I0316 13:37:39.970727 7 log.go:172] (0xc002d90840) (0xc0021f8280) Stream added, broadcasting: 1 I0316 13:37:39.972923 7 log.go:172] (0xc002d90840) Reply frame received for 1 I0316 13:37:39.972954 7 log.go:172] (0xc002d90840) (0xc001df4500) Create stream I0316 13:37:39.972971 7 log.go:172] (0xc002d90840) (0xc001df4500) Stream added, broadcasting: 3 I0316 13:37:39.974113 7 log.go:172] (0xc002d90840) Reply frame received for 3 I0316 13:37:39.974133 7 log.go:172] (0xc002d90840) (0xc0021f8320) Create stream I0316 13:37:39.974141 7 log.go:172] (0xc002d90840) (0xc0021f8320) Stream added, broadcasting: 5 I0316 13:37:39.975152 7 log.go:172] (0xc002d90840) Reply frame received for 5 I0316 13:37:40.038395 7 log.go:172] (0xc002d90840) Data frame received for 5 I0316 13:37:40.038451 7 log.go:172] (0xc0021f8320) (5) Data frame handling I0316 13:37:40.038499 7 log.go:172] (0xc002d90840) Data frame received for 3 I0316 13:37:40.038522 7 log.go:172] (0xc001df4500) (3) Data frame handling I0316 13:37:40.038547 7 log.go:172] (0xc001df4500) (3) Data frame sent I0316 13:37:40.038561 7 log.go:172] (0xc002d90840) Data frame received for 3 I0316 13:37:40.038573 7 log.go:172] (0xc001df4500) (3) Data frame handling I0316 13:37:40.039771 7 log.go:172] (0xc002d90840) Data frame received for 1 I0316 13:37:40.039788 7 log.go:172] (0xc0021f8280) (1) Data frame handling I0316 13:37:40.039801 7 log.go:172] (0xc0021f8280) (1) Data frame sent I0316 13:37:40.039810 7 log.go:172] (0xc002d90840) (0xc0021f8280) 
Stream removed, broadcasting: 1 I0316 13:37:40.039877 7 log.go:172] (0xc002d90840) Go away received I0316 13:37:40.039910 7 log.go:172] (0xc002d90840) (0xc0021f8280) Stream removed, broadcasting: 1 I0316 13:37:40.039926 7 log.go:172] (0xc002d90840) (0xc001df4500) Stream removed, broadcasting: 3 I0316 13:37:40.039932 7 log.go:172] (0xc002d90840) (0xc0021f8320) Stream removed, broadcasting: 5 Mar 16 13:37:40.039: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Mar 16 13:37:40.039: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9054 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:37:40.039: INFO: >>> kubeConfig: /root/.kube/config I0316 13:37:40.071015 7 log.go:172] (0xc002be24d0) (0xc001df48c0) Create stream I0316 13:37:40.071044 7 log.go:172] (0xc002be24d0) (0xc001df48c0) Stream added, broadcasting: 1 I0316 13:37:40.072826 7 log.go:172] (0xc002be24d0) Reply frame received for 1 I0316 13:37:40.072863 7 log.go:172] (0xc002be24d0) (0xc0021f83c0) Create stream I0316 13:37:40.072881 7 log.go:172] (0xc002be24d0) (0xc0021f83c0) Stream added, broadcasting: 3 I0316 13:37:40.073916 7 log.go:172] (0xc002be24d0) Reply frame received for 3 I0316 13:37:40.073955 7 log.go:172] (0xc002be24d0) (0xc0021f8460) Create stream I0316 13:37:40.073969 7 log.go:172] (0xc002be24d0) (0xc0021f8460) Stream added, broadcasting: 5 I0316 13:37:40.074992 7 log.go:172] (0xc002be24d0) Reply frame received for 5 I0316 13:37:40.133910 7 log.go:172] (0xc002be24d0) Data frame received for 5 I0316 13:37:40.133938 7 log.go:172] (0xc0021f8460) (5) Data frame handling I0316 13:37:40.133973 7 log.go:172] (0xc002be24d0) Data frame received for 3 I0316 13:37:40.134014 7 log.go:172] (0xc0021f83c0) (3) Data frame handling I0316 13:37:40.134045 7 log.go:172] (0xc0021f83c0) (3) Data frame sent I0316 13:37:40.134067 7 
log.go:172] (0xc002be24d0) Data frame received for 3 I0316 13:37:40.134086 7 log.go:172] (0xc0021f83c0) (3) Data frame handling I0316 13:37:40.135783 7 log.go:172] (0xc002be24d0) Data frame received for 1 I0316 13:37:40.135811 7 log.go:172] (0xc001df48c0) (1) Data frame handling I0316 13:37:40.135823 7 log.go:172] (0xc001df48c0) (1) Data frame sent I0316 13:37:40.135836 7 log.go:172] (0xc002be24d0) (0xc001df48c0) Stream removed, broadcasting: 1 I0316 13:37:40.135854 7 log.go:172] (0xc002be24d0) Go away received I0316 13:37:40.136030 7 log.go:172] (0xc002be24d0) (0xc001df48c0) Stream removed, broadcasting: 1 I0316 13:37:40.136072 7 log.go:172] (0xc002be24d0) (0xc0021f83c0) Stream removed, broadcasting: 3 I0316 13:37:40.136116 7 log.go:172] (0xc002be24d0) (0xc0021f8460) Stream removed, broadcasting: 5 Mar 16 13:37:40.136: INFO: Exec stderr: "" Mar 16 13:37:40.136: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9054 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:37:40.136: INFO: >>> kubeConfig: /root/.kube/config I0316 13:37:40.164243 7 log.go:172] (0xc002d90e70) (0xc0021f8820) Create stream I0316 13:37:40.164270 7 log.go:172] (0xc002d90e70) (0xc0021f8820) Stream added, broadcasting: 1 I0316 13:37:40.166149 7 log.go:172] (0xc002d90e70) Reply frame received for 1 I0316 13:37:40.166197 7 log.go:172] (0xc002d90e70) (0xc000c5bf40) Create stream I0316 13:37:40.166231 7 log.go:172] (0xc002d90e70) (0xc000c5bf40) Stream added, broadcasting: 3 I0316 13:37:40.167126 7 log.go:172] (0xc002d90e70) Reply frame received for 3 I0316 13:37:40.167168 7 log.go:172] (0xc002d90e70) (0xc001df4960) Create stream I0316 13:37:40.167184 7 log.go:172] (0xc002d90e70) (0xc001df4960) Stream added, broadcasting: 5 I0316 13:37:40.168226 7 log.go:172] (0xc002d90e70) Reply frame received for 5 I0316 13:37:40.222641 7 log.go:172] (0xc002d90e70) Data frame received for 5 
I0316 13:37:40.222687 7 log.go:172] (0xc001df4960) (5) Data frame handling I0316 13:37:40.222717 7 log.go:172] (0xc002d90e70) Data frame received for 3 I0316 13:37:40.222736 7 log.go:172] (0xc000c5bf40) (3) Data frame handling I0316 13:37:40.222759 7 log.go:172] (0xc000c5bf40) (3) Data frame sent I0316 13:37:40.222770 7 log.go:172] (0xc002d90e70) Data frame received for 3 I0316 13:37:40.222781 7 log.go:172] (0xc000c5bf40) (3) Data frame handling I0316 13:37:40.224373 7 log.go:172] (0xc002d90e70) Data frame received for 1 I0316 13:37:40.224407 7 log.go:172] (0xc0021f8820) (1) Data frame handling I0316 13:37:40.224426 7 log.go:172] (0xc0021f8820) (1) Data frame sent I0316 13:37:40.224470 7 log.go:172] (0xc002d90e70) (0xc0021f8820) Stream removed, broadcasting: 1 I0316 13:37:40.224508 7 log.go:172] (0xc002d90e70) Go away received I0316 13:37:40.224630 7 log.go:172] (0xc002d90e70) (0xc0021f8820) Stream removed, broadcasting: 1 I0316 13:37:40.224709 7 log.go:172] (0xc002d90e70) (0xc000c5bf40) Stream removed, broadcasting: 3 I0316 13:37:40.224774 7 log.go:172] (0xc002d90e70) (0xc001df4960) Stream removed, broadcasting: 5 Mar 16 13:37:40.224: INFO: Exec stderr: "" Mar 16 13:37:40.224: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9054 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:37:40.224: INFO: >>> kubeConfig: /root/.kube/config I0316 13:37:40.326280 7 log.go:172] (0xc002c75760) (0xc000fde140) Create stream I0316 13:37:40.326321 7 log.go:172] (0xc002c75760) (0xc000fde140) Stream added, broadcasting: 1 I0316 13:37:40.327807 7 log.go:172] (0xc002c75760) Reply frame received for 1 I0316 13:37:40.327847 7 log.go:172] (0xc002c75760) (0xc000fde3c0) Create stream I0316 13:37:40.327862 7 log.go:172] (0xc002c75760) (0xc000fde3c0) Stream added, broadcasting: 3 I0316 13:37:40.328602 7 log.go:172] (0xc002c75760) Reply frame received for 3 I0316 13:37:40.328647 
7 log.go:172] (0xc002c75760) (0xc001df4c80) Create stream I0316 13:37:40.328664 7 log.go:172] (0xc002c75760) (0xc001df4c80) Stream added, broadcasting: 5 I0316 13:37:40.329722 7 log.go:172] (0xc002c75760) Reply frame received for 5 I0316 13:37:40.388254 7 log.go:172] (0xc002c75760) Data frame received for 5 I0316 13:37:40.388278 7 log.go:172] (0xc001df4c80) (5) Data frame handling I0316 13:37:40.388331 7 log.go:172] (0xc002c75760) Data frame received for 3 I0316 13:37:40.388357 7 log.go:172] (0xc000fde3c0) (3) Data frame handling I0316 13:37:40.388385 7 log.go:172] (0xc000fde3c0) (3) Data frame sent I0316 13:37:40.388393 7 log.go:172] (0xc002c75760) Data frame received for 3 I0316 13:37:40.388398 7 log.go:172] (0xc000fde3c0) (3) Data frame handling I0316 13:37:40.390484 7 log.go:172] (0xc002c75760) Data frame received for 1 I0316 13:37:40.390507 7 log.go:172] (0xc000fde140) (1) Data frame handling I0316 13:37:40.390528 7 log.go:172] (0xc000fde140) (1) Data frame sent I0316 13:37:40.390543 7 log.go:172] (0xc002c75760) (0xc000fde140) Stream removed, broadcasting: 1 I0316 13:37:40.390560 7 log.go:172] (0xc002c75760) Go away received I0316 13:37:40.390722 7 log.go:172] (0xc002c75760) (0xc000fde140) Stream removed, broadcasting: 1 I0316 13:37:40.390753 7 log.go:172] (0xc002c75760) (0xc000fde3c0) Stream removed, broadcasting: 3 I0316 13:37:40.390767 7 log.go:172] (0xc002c75760) (0xc001df4c80) Stream removed, broadcasting: 5 Mar 16 13:37:40.390: INFO: Exec stderr: "" Mar 16 13:37:40.390: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9054 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:37:40.390: INFO: >>> kubeConfig: /root/.kube/config I0316 13:37:40.428630 7 log.go:172] (0xc0028afe40) (0xc0024f1400) Create stream I0316 13:37:40.428651 7 log.go:172] (0xc0028afe40) (0xc0024f1400) Stream added, broadcasting: 1 I0316 13:37:40.431198 7 log.go:172] 
(0xc0028afe40) Reply frame received for 1 I0316 13:37:40.431225 7 log.go:172] (0xc0028afe40) (0xc001aada40) Create stream I0316 13:37:40.431248 7 log.go:172] (0xc0028afe40) (0xc001aada40) Stream added, broadcasting: 3 I0316 13:37:40.432181 7 log.go:172] (0xc0028afe40) Reply frame received for 3 I0316 13:37:40.432216 7 log.go:172] (0xc0028afe40) (0xc001aadcc0) Create stream I0316 13:37:40.432228 7 log.go:172] (0xc0028afe40) (0xc001aadcc0) Stream added, broadcasting: 5 I0316 13:37:40.433275 7 log.go:172] (0xc0028afe40) Reply frame received for 5 I0316 13:37:40.489799 7 log.go:172] (0xc0028afe40) Data frame received for 5 I0316 13:37:40.489824 7 log.go:172] (0xc001aadcc0) (5) Data frame handling I0316 13:37:40.489863 7 log.go:172] (0xc0028afe40) Data frame received for 3 I0316 13:37:40.489900 7 log.go:172] (0xc001aada40) (3) Data frame handling I0316 13:37:40.489929 7 log.go:172] (0xc001aada40) (3) Data frame sent I0316 13:37:40.489949 7 log.go:172] (0xc0028afe40) Data frame received for 3 I0316 13:37:40.489962 7 log.go:172] (0xc001aada40) (3) Data frame handling I0316 13:37:40.491245 7 log.go:172] (0xc0028afe40) Data frame received for 1 I0316 13:37:40.491288 7 log.go:172] (0xc0024f1400) (1) Data frame handling I0316 13:37:40.491318 7 log.go:172] (0xc0024f1400) (1) Data frame sent I0316 13:37:40.491347 7 log.go:172] (0xc0028afe40) (0xc0024f1400) Stream removed, broadcasting: 1 I0316 13:37:40.491375 7 log.go:172] (0xc0028afe40) Go away received I0316 13:37:40.491428 7 log.go:172] (0xc0028afe40) (0xc0024f1400) Stream removed, broadcasting: 1 I0316 13:37:40.491449 7 log.go:172] (0xc0028afe40) (0xc001aada40) Stream removed, broadcasting: 3 I0316 13:37:40.491456 7 log.go:172] (0xc0028afe40) (0xc001aadcc0) Stream removed, broadcasting: 5 Mar 16 13:37:40.491: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:37:40.491: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-9054" for this suite. • [SLOW TEST:17.861 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":125,"skipped":2030,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:37:40.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-7b68f472-f136-4d58-96a5-3ae35b82b9fc STEP: Creating secret with name secret-projected-all-test-volume-5261f606-3863-4b37-a1ed-0ba98f30ffc5 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 16 13:37:40.958: INFO: Waiting up to 5m0s for pod "projected-volume-59682204-6961-4307-b713-1579ba7d48c7" in namespace "projected-9856" to be "Succeeded or Failed" Mar 16 
13:37:40.980: INFO: Pod "projected-volume-59682204-6961-4307-b713-1579ba7d48c7": Phase="Pending", Reason="", readiness=false. Elapsed: 22.560064ms Mar 16 13:37:43.106: INFO: Pod "projected-volume-59682204-6961-4307-b713-1579ba7d48c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148180662s Mar 16 13:37:45.110: INFO: Pod "projected-volume-59682204-6961-4307-b713-1579ba7d48c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151892787s STEP: Saw pod success Mar 16 13:37:45.110: INFO: Pod "projected-volume-59682204-6961-4307-b713-1579ba7d48c7" satisfied condition "Succeeded or Failed" Mar 16 13:37:45.112: INFO: Trying to get logs from node latest-worker pod projected-volume-59682204-6961-4307-b713-1579ba7d48c7 container projected-all-volume-test: STEP: delete the pod Mar 16 13:37:45.436: INFO: Waiting for pod projected-volume-59682204-6961-4307-b713-1579ba7d48c7 to disappear Mar 16 13:37:45.574: INFO: Pod projected-volume-59682204-6961-4307-b713-1579ba7d48c7 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:37:45.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9856" for this suite. 
• [SLOW TEST:5.166 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":126,"skipped":2036,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:37:45.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 16 13:37:46.989: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 16 13:37:48.999: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962666, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962666, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962667, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962666, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:37:51.088: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962666, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962666, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962667, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962666, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 13:37:54.056: INFO: Waiting for amount 
of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:37:54.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:37:55.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8646" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:10.701 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":127,"skipped":2056,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 
STEP: Creating a kubernetes client Mar 16 13:37:56.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:37:57.732: INFO: Waiting up to 5m0s for pod "busybox-user-65534-8166a521-40fc-4368-8e08-9ca843f2fb3c" in namespace "security-context-test-6127" to be "Succeeded or Failed" Mar 16 13:37:57.977: INFO: Pod "busybox-user-65534-8166a521-40fc-4368-8e08-9ca843f2fb3c": Phase="Pending", Reason="", readiness=false. Elapsed: 245.53152ms Mar 16 13:38:00.472: INFO: Pod "busybox-user-65534-8166a521-40fc-4368-8e08-9ca843f2fb3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.739673845s Mar 16 13:38:02.627: INFO: Pod "busybox-user-65534-8166a521-40fc-4368-8e08-9ca843f2fb3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.894782698s Mar 16 13:38:04.630: INFO: Pod "busybox-user-65534-8166a521-40fc-4368-8e08-9ca843f2fb3c": Phase="Running", Reason="", readiness=true. Elapsed: 6.898257042s Mar 16 13:38:06.674: INFO: Pod "busybox-user-65534-8166a521-40fc-4368-8e08-9ca843f2fb3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.942536397s Mar 16 13:38:06.675: INFO: Pod "busybox-user-65534-8166a521-40fc-4368-8e08-9ca843f2fb3c" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:38:06.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6127" for this suite. 
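The busybox pod above is created with `runAsUser: 65534` ("nobody") and the test then verifies the UID the container actually ran with. The precedence rule this relies on is that a container-level `securityContext.runAsUser` overrides the pod-level one. A simplified sketch of that resolution; `effectiveRunAsUser` is a hypothetical helper, not kubelet code, and the real securityContext merge also covers `runAsGroup`, `runAsNonRoot`, and other fields:

```go
package main

import "fmt"

// effectiveRunAsUser resolves the UID a container runs as: a
// container-level runAsUser takes precedence over the pod-level
// value, and nil means "unset" (the image's default user applies).
// Simplified illustration of the Kubernetes precedence rule only.
func effectiveRunAsUser(podLevel, containerLevel *int64) *int64 {
	if containerLevel != nil {
		return containerLevel
	}
	return podLevel
}

func main() {
	nobody := int64(65534)
	root := int64(0)

	// Pod-level says 0, container-level says 65534: container wins.
	fmt.Println(*effectiveRunAsUser(&root, &nobody)) // prints: 65534

	// Only the pod-level field is set: it applies to the container.
	fmt.Println(*effectiveRunAsUser(&nobody, nil)) // prints: 65534
}
```

Either placement therefore yields uid 65534 for the single-container pod in this test, which is why the test only needs to assert the observed UID rather than where the field was set.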
• [SLOW TEST:10.324 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2066,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:38:06.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:38:07.018: INFO: Creating deployment "test-recreate-deployment" Mar 16 13:38:07.052: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 16 13:38:07.208: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 16 13:38:09.215: INFO: Waiting deployment "test-recreate-deployment" to complete 
Mar 16 13:38:09.217: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962687, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962687, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962687, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962687, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:38:11.369: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962687, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962687, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962687, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962687, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:38:13.441: INFO: Triggering a new rollout 
for deployment "test-recreate-deployment" Mar 16 13:38:13.497: INFO: Updating deployment test-recreate-deployment Mar 16 13:38:13.497: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 16 13:38:16.975: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-7366 /apis/apps/v1/namespaces/deployment-7366/deployments/test-recreate-deployment b25945d0-4726-45c1-9c32-4125f83ccdbc 278509 2 2020-03-16 13:38:07 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cde758 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-16 13:38:16 +0000 UTC,LastTransitionTime:2020-03-16 13:38:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-16 13:38:16 +0000 UTC,LastTransitionTime:2020-03-16 13:38:07 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 16 13:38:17.444: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-7366 /apis/apps/v1/namespaces/deployment-7366/replicasets/test-recreate-deployment-5f94c574ff 2b64bb12-a5ff-424b-9290-902d3ccb4455 278505 1 2020-03-16 13:38:14 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment b25945d0-4726-45c1-9c32-4125f83ccdbc 0xc0027c6bb7 0xc0027c6bb8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027c6c18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 16 13:38:17.444: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Mar 16 13:38:17.444: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-7366 /apis/apps/v1/namespaces/deployment-7366/replicasets/test-recreate-deployment-846c7dd955 73f8cd98-c5eb-41fe-96c8-0bb4d866fe35 278494 2 2020-03-16 13:38:07 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment b25945d0-4726-45c1-9c32-4125f83ccdbc 0xc0027c6c87 0xc0027c6c88}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027c6cf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 16 13:38:17.633: INFO: Pod "test-recreate-deployment-5f94c574ff-j64qm" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-j64qm test-recreate-deployment-5f94c574ff- deployment-7366 /api/v1/namespaces/deployment-7366/pods/test-recreate-deployment-5f94c574ff-j64qm fd36fad3-0db9-4f47-bd71-5e4c205f7319 278512 0 2020-03-16 13:38:15 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 2b64bb12-a5ff-424b-9290-902d3ccb4455 0xc0027c7477 0xc0027c7478}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kgkdr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kgkdr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kgkdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:38:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:38:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:38:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:38:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-03-16 13:38:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:38:17.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7366" for this suite. • [SLOW TEST:12.378 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":129,"skipped":2071,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:38:19.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 16 13:38:21.655: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3625 /api/v1/namespaces/watch-3625/configmaps/e2e-watch-test-resource-version 5c72f09e-b6ca-4096-9163-ef5a24a7ad73 278531 0 2020-03-16 13:38:20 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 16 13:38:21.655: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3625 /api/v1/namespaces/watch-3625/configmaps/e2e-watch-test-resource-version 5c72f09e-b6ca-4096-9163-ef5a24a7ad73 278532 0 2020-03-16 13:38:20 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:38:21.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3625" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":130,"skipped":2080,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:38:21.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:38:24.567: INFO: The status of Pod test-webserver-8c763684-dbbb-40f6-b02d-e36af9f3bb83 is Pending, waiting for it to be Running (with Ready = true) Mar 16 13:38:27.291: INFO: The status of Pod test-webserver-8c763684-dbbb-40f6-b02d-e36af9f3bb83 is Pending, waiting for it to be Running (with Ready = true) Mar 16 13:38:28.617: INFO: The status of Pod test-webserver-8c763684-dbbb-40f6-b02d-e36af9f3bb83 is Pending, waiting for it to be Running (with Ready = true) Mar 16 13:38:30.699: INFO: The status of Pod test-webserver-8c763684-dbbb-40f6-b02d-e36af9f3bb83 is Pending, waiting for it to be Running (with Ready = true) Mar 16 13:38:32.759: INFO: The status of Pod test-webserver-8c763684-dbbb-40f6-b02d-e36af9f3bb83 is Pending, waiting for it to be Running (with Ready = true) Mar 16 
13:38:34.645: INFO: The status of Pod test-webserver-8c763684-dbbb-40f6-b02d-e36af9f3bb83 is Pending, waiting for it to be Running (with Ready = true) Mar 16 13:38:36.598: INFO: The status of Pod test-webserver-8c763684-dbbb-40f6-b02d-e36af9f3bb83 is Running (Ready = false) Mar 16 13:38:38.571: INFO: The status of Pod test-webserver-8c763684-dbbb-40f6-b02d-e36af9f3bb83 is Running (Ready = false) Mar 16 13:38:40.570: INFO: The status of Pod test-webserver-8c763684-dbbb-40f6-b02d-e36af9f3bb83 is Running (Ready = false) Mar 16 13:38:42.571: INFO: The status of Pod test-webserver-8c763684-dbbb-40f6-b02d-e36af9f3bb83 is Running (Ready = false) Mar 16 13:38:44.571: INFO: The status of Pod test-webserver-8c763684-dbbb-40f6-b02d-e36af9f3bb83 is Running (Ready = false) Mar 16 13:38:46.571: INFO: The status of Pod test-webserver-8c763684-dbbb-40f6-b02d-e36af9f3bb83 is Running (Ready = false) Mar 16 13:38:48.571: INFO: The status of Pod test-webserver-8c763684-dbbb-40f6-b02d-e36af9f3bb83 is Running (Ready = false) Mar 16 13:38:50.571: INFO: The status of Pod test-webserver-8c763684-dbbb-40f6-b02d-e36af9f3bb83 is Running (Ready = false) Mar 16 13:38:52.571: INFO: The status of Pod test-webserver-8c763684-dbbb-40f6-b02d-e36af9f3bb83 is Running (Ready = true) Mar 16 13:38:52.574: INFO: Container started at 2020-03-16 13:38:33 +0000 UTC, pod became ready at 2020-03-16 13:38:51 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:38:52.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3588" for this suite. 
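The pod under test is not printed in this part of the log; as a rough sketch only, a pod that stays Running with Ready=false until its first readiness probe succeeds (as the status lines above show) could look like the following. The image name, port, and timing values here are illustrative assumptions, not taken from the test source.

```yaml
# Sketch only: HTTP readiness probe with an initial delay, so the pod
# reports Running (Ready = false) until the first probe succeeds.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/test-webserver   # assumed image
    ports:
    - containerPort: 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20   # assumed; no restart is expected, matching the test name
      periodSeconds: 3
```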
• [SLOW TEST:30.863 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2097,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:38:52.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6686.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service-2.dns-6686.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6686.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6686.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6686.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6686.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6686.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6686.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6686.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6686.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 13:39:04.146: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:04.353: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:04.392: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:04.533: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:04.542: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:04.544: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from 
pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:04.547: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:04.549: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:04.553: INFO: Lookups using dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6686.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6686.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local jessie_udp@dns-test-service-2.dns-6686.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6686.svc.cluster.local] Mar 16 13:39:09.558: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:09.562: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:09.565: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6686.svc.cluster.local from 
pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:09.568: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:09.688: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:09.690: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:09.693: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:09.696: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:09.700: INFO: Lookups using dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6686.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6686.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local jessie_udp@dns-test-service-2.dns-6686.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6686.svc.cluster.local] Mar 16 13:39:14.558: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:14.561: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:14.564: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:14.566: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:14.574: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:14.623: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:14.688: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6686.svc.cluster.local from pod 
dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:14.691: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:14.697: INFO: Lookups using dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6686.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6686.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local jessie_udp@dns-test-service-2.dns-6686.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6686.svc.cluster.local] Mar 16 13:39:19.558: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:19.561: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:19.565: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:19.568: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6686.svc.cluster.local from pod 
dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:19.578: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:19.584: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:19.586: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:19.589: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:19.593: INFO: Lookups using dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6686.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6686.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local jessie_udp@dns-test-service-2.dns-6686.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6686.svc.cluster.local] Mar 16 13:39:24.558: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:24.561: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:24.564: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:24.566: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:24.573: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:24.575: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:24.578: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:24.580: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:24.584: INFO: Lookups using dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6686.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6686.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local jessie_udp@dns-test-service-2.dns-6686.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6686.svc.cluster.local] Mar 16 13:39:29.725: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:29.728: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:29.731: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:29.735: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:29.793: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:29.796: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:29.799: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:29.802: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6686.svc.cluster.local from pod dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408: the server could not find the requested resource (get pods dns-test-f67b3666-b265-4e6f-8258-155261bcd408) Mar 16 13:39:29.808: INFO: Lookups using dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6686.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6686.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6686.svc.cluster.local jessie_udp@dns-test-service-2.dns-6686.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6686.svc.cluster.local] Mar 16 13:39:34.588: INFO: DNS probes using dns-6686/dns-test-f67b3666-b265-4e6f-8258-155261bcd408 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 
13:39:36.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6686" for this suite. • [SLOW TEST:43.984 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":132,"skipped":2099,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:39:36.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-e5188053-4777-4a62-9562-830cd1dbd70e STEP: Creating a pod to test consume configMaps Mar 16 13:39:37.139: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b990cb2f-a5e8-46fd-af8f-42186f965de9" in namespace "projected-8333" to be "Succeeded or Failed" Mar 16 13:39:37.150: INFO: Pod "pod-projected-configmaps-b990cb2f-a5e8-46fd-af8f-42186f965de9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.419122ms Mar 16 13:39:39.209: INFO: Pod "pod-projected-configmaps-b990cb2f-a5e8-46fd-af8f-42186f965de9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069616855s Mar 16 13:39:41.240: INFO: Pod "pod-projected-configmaps-b990cb2f-a5e8-46fd-af8f-42186f965de9": Phase="Running", Reason="", readiness=true. Elapsed: 4.100697851s Mar 16 13:39:43.243: INFO: Pod "pod-projected-configmaps-b990cb2f-a5e8-46fd-af8f-42186f965de9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.10407773s STEP: Saw pod success Mar 16 13:39:43.243: INFO: Pod "pod-projected-configmaps-b990cb2f-a5e8-46fd-af8f-42186f965de9" satisfied condition "Succeeded or Failed" Mar 16 13:39:43.246: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-b990cb2f-a5e8-46fd-af8f-42186f965de9 container projected-configmap-volume-test: STEP: delete the pod Mar 16 13:39:43.503: INFO: Waiting for pod pod-projected-configmaps-b990cb2f-a5e8-46fd-af8f-42186f965de9 to disappear Mar 16 13:39:43.506: INFO: Pod pod-projected-configmaps-b990cb2f-a5e8-46fd-af8f-42186f965de9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:39:43.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8333" for this suite. 
• [SLOW TEST:6.968 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":133,"skipped":2114,"failed":0} [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:39:43.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0316 13:40:24.613856 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 16 13:40:24.613: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:40:24.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2706" for this suite. 
• [SLOW TEST:41.090 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":134,"skipped":2114,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:40:24.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components Mar 16 13:40:25.272: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Mar 16 13:40:25.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3118' Mar 16 13:40:32.450: INFO: stderr: "" Mar 16 13:40:32.450: INFO: stdout: "service/agnhost-slave created\n" Mar 16 13:40:32.450: INFO: 
apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Mar 16 13:40:32.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3118' Mar 16 13:40:32.868: INFO: stderr: "" Mar 16 13:40:32.868: INFO: stdout: "service/agnhost-master created\n" Mar 16 13:40:32.868: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 16 13:40:32.868: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3118' Mar 16 13:40:33.228: INFO: stderr: "" Mar 16 13:40:33.228: INFO: stdout: "service/frontend created\n" Mar 16 13:40:33.228: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 16 13:40:33.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3118' Mar 16 13:40:33.894: INFO: stderr: "" Mar 16 13:40:33.894: INFO: stdout: "deployment.apps/frontend created\n" Mar 16 13:40:33.894: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: 
labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 16 13:40:33.894: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3118' Mar 16 13:40:34.433: INFO: stderr: "" Mar 16 13:40:34.433: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 16 13:40:34.434: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 16 13:40:34.434: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3118' Mar 16 13:40:35.403: INFO: stderr: "" Mar 16 13:40:35.403: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 16 13:40:35.403: INFO: Waiting for all frontend pods to be Running. Mar 16 13:40:50.454: INFO: Waiting for frontend to serve content. Mar 16 13:40:50.727: INFO: Trying to add a new entry to the guestbook. Mar 16 13:40:50.799: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Mar 16 13:40:50.805: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3118' Mar 16 13:40:51.127: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 16 13:40:51.127: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 16 13:40:51.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3118' Mar 16 13:40:51.327: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 13:40:51.327: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 16 13:40:51.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3118' Mar 16 13:40:51.574: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 13:40:51.574: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 16 13:40:51.574: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3118' Mar 16 13:40:51.675: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 16 13:40:51.675: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 16 13:40:51.675: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3118' Mar 16 13:40:51.874: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 13:40:51.874: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 16 13:40:51.875: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3118' Mar 16 13:40:52.113: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 13:40:52.114: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:40:52.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3118" for this suite. 
• [SLOW TEST:28.367 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":135,"skipped":2117,"failed":0} [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:40:52.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-8821 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 16 13:40:53.847: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Mar 16 13:40:54.290: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 16 13:40:56.294: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Mar 16 13:40:58.402: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = 
true) Mar 16 13:41:00.588: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:41:02.295: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:41:04.427: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:41:06.312: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:41:08.420: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:41:10.295: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:41:12.324: INFO: The status of Pod netserver-0 is Running (Ready = false) Mar 16 13:41:14.408: INFO: The status of Pod netserver-0 is Running (Ready = true) Mar 16 13:41:14.413: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Mar 16 13:41:20.484: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.128:8080/dial?request=hostname&protocol=udp&host=10.244.2.127&port=8081&tries=1'] Namespace:pod-network-test-8821 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:41:20.484: INFO: >>> kubeConfig: /root/.kube/config I0316 13:41:20.523138 7 log.go:172] (0xc0043b24d0) (0xc0021f94a0) Create stream I0316 13:41:20.523169 7 log.go:172] (0xc0043b24d0) (0xc0021f94a0) Stream added, broadcasting: 1 I0316 13:41:20.525856 7 log.go:172] (0xc0043b24d0) Reply frame received for 1 I0316 13:41:20.525909 7 log.go:172] (0xc0043b24d0) (0xc001df4500) Create stream I0316 13:41:20.525929 7 log.go:172] (0xc0043b24d0) (0xc001df4500) Stream added, broadcasting: 3 I0316 13:41:20.526976 7 log.go:172] (0xc0043b24d0) Reply frame received for 3 I0316 13:41:20.527021 7 log.go:172] (0xc0043b24d0) (0xc001df45a0) Create stream I0316 13:41:20.527038 7 log.go:172] (0xc0043b24d0) (0xc001df45a0) Stream added, broadcasting: 5 I0316 13:41:20.528718 7 log.go:172] (0xc0043b24d0) Reply frame received for 5 I0316 13:41:20.618156 7 log.go:172] (0xc0043b24d0) Data 
frame received for 3 I0316 13:41:20.618207 7 log.go:172] (0xc001df4500) (3) Data frame handling I0316 13:41:20.618240 7 log.go:172] (0xc001df4500) (3) Data frame sent I0316 13:41:20.618624 7 log.go:172] (0xc0043b24d0) Data frame received for 3 I0316 13:41:20.618680 7 log.go:172] (0xc001df4500) (3) Data frame handling I0316 13:41:20.618771 7 log.go:172] (0xc0043b24d0) Data frame received for 5 I0316 13:41:20.618795 7 log.go:172] (0xc001df45a0) (5) Data frame handling I0316 13:41:20.620647 7 log.go:172] (0xc0043b24d0) Data frame received for 1 I0316 13:41:20.620662 7 log.go:172] (0xc0021f94a0) (1) Data frame handling I0316 13:41:20.620669 7 log.go:172] (0xc0021f94a0) (1) Data frame sent I0316 13:41:20.620689 7 log.go:172] (0xc0043b24d0) (0xc0021f94a0) Stream removed, broadcasting: 1 I0316 13:41:20.620739 7 log.go:172] (0xc0043b24d0) Go away received I0316 13:41:20.620768 7 log.go:172] (0xc0043b24d0) (0xc0021f94a0) Stream removed, broadcasting: 1 I0316 13:41:20.620782 7 log.go:172] (0xc0043b24d0) (0xc001df4500) Stream removed, broadcasting: 3 I0316 13:41:20.620797 7 log.go:172] (0xc0043b24d0) (0xc001df45a0) Stream removed, broadcasting: 5 Mar 16 13:41:20.620: INFO: Waiting for responses: map[] Mar 16 13:41:20.624: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.128:8080/dial?request=hostname&protocol=udp&host=10.244.1.54&port=8081&tries=1'] Namespace:pod-network-test-8821 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:41:20.624: INFO: >>> kubeConfig: /root/.kube/config I0316 13:41:20.655939 7 log.go:172] (0xc002e3a630) (0xc001df4d20) Create stream I0316 13:41:20.655960 7 log.go:172] (0xc002e3a630) (0xc001df4d20) Stream added, broadcasting: 1 I0316 13:41:20.657824 7 log.go:172] (0xc002e3a630) Reply frame received for 1 I0316 13:41:20.657864 7 log.go:172] (0xc002e3a630) (0xc001f2b220) Create stream I0316 13:41:20.657878 7 log.go:172] (0xc002e3a630) 
(0xc001f2b220) Stream added, broadcasting: 3 I0316 13:41:20.658974 7 log.go:172] (0xc002e3a630) Reply frame received for 3 I0316 13:41:20.659035 7 log.go:172] (0xc002e3a630) (0xc001df4dc0) Create stream I0316 13:41:20.659051 7 log.go:172] (0xc002e3a630) (0xc001df4dc0) Stream added, broadcasting: 5 I0316 13:41:20.660101 7 log.go:172] (0xc002e3a630) Reply frame received for 5 I0316 13:41:20.728238 7 log.go:172] (0xc002e3a630) Data frame received for 3 I0316 13:41:20.728259 7 log.go:172] (0xc001f2b220) (3) Data frame handling I0316 13:41:20.728273 7 log.go:172] (0xc001f2b220) (3) Data frame sent I0316 13:41:20.728677 7 log.go:172] (0xc002e3a630) Data frame received for 3 I0316 13:41:20.728693 7 log.go:172] (0xc001f2b220) (3) Data frame handling I0316 13:41:20.728788 7 log.go:172] (0xc002e3a630) Data frame received for 5 I0316 13:41:20.728806 7 log.go:172] (0xc001df4dc0) (5) Data frame handling I0316 13:41:20.730614 7 log.go:172] (0xc002e3a630) Data frame received for 1 I0316 13:41:20.730629 7 log.go:172] (0xc001df4d20) (1) Data frame handling I0316 13:41:20.730645 7 log.go:172] (0xc001df4d20) (1) Data frame sent I0316 13:41:20.730662 7 log.go:172] (0xc002e3a630) (0xc001df4d20) Stream removed, broadcasting: 1 I0316 13:41:20.730751 7 log.go:172] (0xc002e3a630) (0xc001df4d20) Stream removed, broadcasting: 1 I0316 13:41:20.730773 7 log.go:172] (0xc002e3a630) (0xc001f2b220) Stream removed, broadcasting: 3 I0316 13:41:20.730802 7 log.go:172] (0xc002e3a630) Go away received I0316 13:41:20.730849 7 log.go:172] (0xc002e3a630) (0xc001df4dc0) Stream removed, broadcasting: 5 Mar 16 13:41:20.730: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:41:20.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8821" for this suite. 
• [SLOW TEST:27.748 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2117,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:41:20.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-b8adcd9e-8db1-45b2-ba71-2b38727db12b STEP: Creating a pod to test consume secrets Mar 16 13:41:21.009: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dd4793ad-a682-44c4-b300-9e37e013a9a3" in namespace "projected-8044" to be "Succeeded or Failed" Mar 16 13:41:21.022: INFO: Pod 
"pod-projected-secrets-dd4793ad-a682-44c4-b300-9e37e013a9a3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.076369ms Mar 16 13:41:23.026: INFO: Pod "pod-projected-secrets-dd4793ad-a682-44c4-b300-9e37e013a9a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017321518s Mar 16 13:41:25.031: INFO: Pod "pod-projected-secrets-dd4793ad-a682-44c4-b300-9e37e013a9a3": Phase="Running", Reason="", readiness=true. Elapsed: 4.021495491s Mar 16 13:41:27.034: INFO: Pod "pod-projected-secrets-dd4793ad-a682-44c4-b300-9e37e013a9a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025411416s STEP: Saw pod success Mar 16 13:41:27.035: INFO: Pod "pod-projected-secrets-dd4793ad-a682-44c4-b300-9e37e013a9a3" satisfied condition "Succeeded or Failed" Mar 16 13:41:27.038: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-dd4793ad-a682-44c4-b300-9e37e013a9a3 container projected-secret-volume-test: STEP: delete the pod Mar 16 13:41:27.078: INFO: Waiting for pod pod-projected-secrets-dd4793ad-a682-44c4-b300-9e37e013a9a3 to disappear Mar 16 13:41:27.282: INFO: Pod pod-projected-secrets-dd4793ad-a682-44c4-b300-9e37e013a9a3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:41:27.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8044" for this suite. 
• [SLOW TEST:6.553 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2148,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:41:27.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 16 13:41:28.722: INFO: Waiting up to 5m0s for pod "pod-7921a146-fbb9-446b-9080-aad0cdda06d2" in namespace "emptydir-9305" to be "Succeeded or Failed" Mar 16 13:41:28.945: INFO: Pod "pod-7921a146-fbb9-446b-9080-aad0cdda06d2": Phase="Pending", Reason="", readiness=false. Elapsed: 223.315669ms Mar 16 13:41:31.121: INFO: Pod "pod-7921a146-fbb9-446b-9080-aad0cdda06d2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.399041161s Mar 16 13:41:33.165: INFO: Pod "pod-7921a146-fbb9-446b-9080-aad0cdda06d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.443143795s Mar 16 13:41:35.168: INFO: Pod "pod-7921a146-fbb9-446b-9080-aad0cdda06d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.446439069s STEP: Saw pod success Mar 16 13:41:35.168: INFO: Pod "pod-7921a146-fbb9-446b-9080-aad0cdda06d2" satisfied condition "Succeeded or Failed" Mar 16 13:41:35.170: INFO: Trying to get logs from node latest-worker2 pod pod-7921a146-fbb9-446b-9080-aad0cdda06d2 container test-container: STEP: delete the pod Mar 16 13:41:35.196: INFO: Waiting for pod pod-7921a146-fbb9-446b-9080-aad0cdda06d2 to disappear Mar 16 13:41:35.218: INFO: Pod pod-7921a146-fbb9-446b-9080-aad0cdda06d2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:41:35.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9305" for this suite. 
• [SLOW TEST:7.932 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":138,"skipped":2161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:41:35.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:41:35.456: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 16 13:41:35.499: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 16 13:41:40.503: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 16 13:41:40.503: INFO: Creating deployment "test-rolling-update-deployment" Mar 16 13:41:40.536: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set 
"test-rolling-update-controller" has Mar 16 13:41:40.649: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 16 13:41:43.050: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 16 13:41:43.053: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962900, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962900, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962900, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719962900, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:41:45.059: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 16 13:41:45.067: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-5779 /apis/apps/v1/namespaces/deployment-5779/deployments/test-rolling-update-deployment b89b75ef-9571-4283-874c-3595482312b1 279742 1 2020-03-16 13:41:40 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a32808 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-16 13:41:40 +0000 UTC,LastTransitionTime:2020-03-16 13:41:40 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-03-16 13:41:44 +0000 UTC,LastTransitionTime:2020-03-16 13:41:40 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 16 13:41:45.070: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-5779
/apis/apps/v1/namespaces/deployment-5779/replicasets/test-rolling-update-deployment-664dd8fc7f e8deef1f-f1cf-4214-888d-e12775d540e5 279731 1 2020-03-16 13:41:40 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment b89b75ef-9571-4283-874c-3595482312b1 0xc002a33087 0xc002a33088}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a330f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 16 13:41:45.070: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 16 13:41:45.070: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-5779 /apis/apps/v1/namespaces/deployment-5779/replicasets/test-rolling-update-controller fb8d95b5-e08e-4465-b747-982743fdd32e 279740 2 2020-03-16 13:41:35 +0000 UTC 
map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment b89b75ef-9571-4283-874c-3595482312b1 0xc002a32fb7 0xc002a32fb8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002a33018 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 16 13:41:45.074: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-bxd54" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-bxd54 test-rolling-update-deployment-664dd8fc7f- deployment-5779 /api/v1/namespaces/deployment-5779/pods/test-rolling-update-deployment-664dd8fc7f-bxd54 0238b3fd-da63-46dc-916e-f92a83e4523c 279730 0 2020-03-16 13:41:40 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f e8deef1f-f1cf-4214-888d-e12775d540e5 0xc003a640a7 0xc003a640a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dlb8b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dlb8b,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dlb8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePull
Secrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:41:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:41:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:41:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:41:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.58,StartTime:2020-03-16 13:41:40 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 13:41:43 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://55bba797c8b30124d8d567a762af51477ceb971fe5910d848e457983ce729993,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:41:45.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5779" for this suite. • [SLOW TEST:9.857 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":139,"skipped":2186,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:41:45.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:41:52.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2095" for this suite. • [SLOW TEST:7.102 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":140,"skipped":2199,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:41:52.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-c8fa87cb-0587-47aa-a6ff-62bd842bbd7a STEP: Creating a pod to test consume configMaps Mar 16 13:41:52.296: INFO: Waiting up to 5m0s for pod "pod-configmaps-1e3026ca-88c4-4616-bcae-31608f4ca433" in namespace "configmap-467" to be "Succeeded or Failed" Mar 16 13:41:52.299: INFO: Pod "pod-configmaps-1e3026ca-88c4-4616-bcae-31608f4ca433": Phase="Pending", Reason="", readiness=false. Elapsed: 2.882509ms Mar 16 13:41:54.303: INFO: Pod "pod-configmaps-1e3026ca-88c4-4616-bcae-31608f4ca433": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006915641s Mar 16 13:41:56.306: INFO: Pod "pod-configmaps-1e3026ca-88c4-4616-bcae-31608f4ca433": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010335509s STEP: Saw pod success Mar 16 13:41:56.306: INFO: Pod "pod-configmaps-1e3026ca-88c4-4616-bcae-31608f4ca433" satisfied condition "Succeeded or Failed" Mar 16 13:41:56.309: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-1e3026ca-88c4-4616-bcae-31608f4ca433 container configmap-volume-test: STEP: delete the pod Mar 16 13:41:56.328: INFO: Waiting for pod pod-configmaps-1e3026ca-88c4-4616-bcae-31608f4ca433 to disappear Mar 16 13:41:56.344: INFO: Pod pod-configmaps-1e3026ca-88c4-4616-bcae-31608f4ca433 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:41:56.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-467" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2248,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:41:56.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Mar 16 13:41:56.430: INFO: Waiting up to 5m0s for pod 
"var-expansion-247cc3a3-f2a5-49d3-b0ed-9b5403827016" in namespace "var-expansion-1390" to be "Succeeded or Failed" Mar 16 13:41:56.449: INFO: Pod "var-expansion-247cc3a3-f2a5-49d3-b0ed-9b5403827016": Phase="Pending", Reason="", readiness=false. Elapsed: 18.740082ms Mar 16 13:41:58.453: INFO: Pod "var-expansion-247cc3a3-f2a5-49d3-b0ed-9b5403827016": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022723022s Mar 16 13:42:00.456: INFO: Pod "var-expansion-247cc3a3-f2a5-49d3-b0ed-9b5403827016": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026622222s STEP: Saw pod success Mar 16 13:42:00.457: INFO: Pod "var-expansion-247cc3a3-f2a5-49d3-b0ed-9b5403827016" satisfied condition "Succeeded or Failed" Mar 16 13:42:00.459: INFO: Trying to get logs from node latest-worker2 pod var-expansion-247cc3a3-f2a5-49d3-b0ed-9b5403827016 container dapi-container: STEP: delete the pod Mar 16 13:42:00.491: INFO: Waiting for pod var-expansion-247cc3a3-f2a5-49d3-b0ed-9b5403827016 to disappear Mar 16 13:42:00.522: INFO: Pod var-expansion-247cc3a3-f2a5-49d3-b0ed-9b5403827016 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:42:00.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1390" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2259,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:42:00.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-6b43679d-9496-4171-901f-a01e1e90c5bf STEP: Creating a pod to test consume configMaps Mar 16 13:42:00.586: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6f5c534d-39bf-4f1d-9b31-1a08676b33b4" in namespace "projected-8710" to be "Succeeded or Failed" Mar 16 13:42:00.597: INFO: Pod "pod-projected-configmaps-6f5c534d-39bf-4f1d-9b31-1a08676b33b4": Phase="Pending", Reason="", readiness=false. Elapsed: 11.242476ms Mar 16 13:42:02.601: INFO: Pod "pod-projected-configmaps-6f5c534d-39bf-4f1d-9b31-1a08676b33b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015061095s Mar 16 13:42:04.606: INFO: Pod "pod-projected-configmaps-6f5c534d-39bf-4f1d-9b31-1a08676b33b4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019548266s STEP: Saw pod success Mar 16 13:42:04.606: INFO: Pod "pod-projected-configmaps-6f5c534d-39bf-4f1d-9b31-1a08676b33b4" satisfied condition "Succeeded or Failed" Mar 16 13:42:04.608: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-6f5c534d-39bf-4f1d-9b31-1a08676b33b4 container projected-configmap-volume-test: STEP: delete the pod Mar 16 13:42:04.640: INFO: Waiting for pod pod-projected-configmaps-6f5c534d-39bf-4f1d-9b31-1a08676b33b4 to disappear Mar 16 13:42:04.652: INFO: Pod pod-projected-configmaps-6f5c534d-39bf-4f1d-9b31-1a08676b33b4 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:42:04.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8710" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":143,"skipped":2265,"failed":0} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:42:04.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Mar 16 13:42:04.756: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:42:19.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-431" for this suite. • [SLOW TEST:15.303 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":144,"skipped":2265,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:42:19.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-2888805a-856b-41dc-8f73-4155bb9924b3 STEP: Creating a pod to test consume secrets Mar 16 13:42:20.321: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5631f21e-a671-4ca9-8a76-f0cbb1a371bf" in namespace "projected-3263" to be "Succeeded or Failed" Mar 16 13:42:20.363: INFO: Pod "pod-projected-secrets-5631f21e-a671-4ca9-8a76-f0cbb1a371bf": Phase="Pending", Reason="", readiness=false. Elapsed: 41.748421ms Mar 16 13:42:22.375: INFO: Pod "pod-projected-secrets-5631f21e-a671-4ca9-8a76-f0cbb1a371bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053318421s Mar 16 13:42:24.379: INFO: Pod "pod-projected-secrets-5631f21e-a671-4ca9-8a76-f0cbb1a371bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057581187s STEP: Saw pod success Mar 16 13:42:24.379: INFO: Pod "pod-projected-secrets-5631f21e-a671-4ca9-8a76-f0cbb1a371bf" satisfied condition "Succeeded or Failed" Mar 16 13:42:24.382: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-5631f21e-a671-4ca9-8a76-f0cbb1a371bf container secret-volume-test: STEP: delete the pod Mar 16 13:42:24.412: INFO: Waiting for pod pod-projected-secrets-5631f21e-a671-4ca9-8a76-f0cbb1a371bf to disappear Mar 16 13:42:24.437: INFO: Pod pod-projected-secrets-5631f21e-a671-4ca9-8a76-f0cbb1a371bf no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:42:24.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3263" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2268,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:42:24.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 16 13:42:24.533: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2299 /api/v1/namespaces/watch-2299/configmaps/e2e-watch-test-watch-closed 88500606-7215-4f48-ada0-fc0abf0b492a 280011 0 2020-03-16 13:42:24 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 16 13:42:24.533: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2299 /api/v1/namespaces/watch-2299/configmaps/e2e-watch-test-watch-closed 88500606-7215-4f48-ada0-fc0abf0b492a 280012 0 2020-03-16 13:42:24 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: 
modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Mar 16 13:42:24.544: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2299 /api/v1/namespaces/watch-2299/configmaps/e2e-watch-test-watch-closed 88500606-7215-4f48-ada0-fc0abf0b492a 280013 0 2020-03-16 13:42:24 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 16 13:42:24.545: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2299 /api/v1/namespaces/watch-2299/configmaps/e2e-watch-test-watch-closed 88500606-7215-4f48-ada0-fc0abf0b492a 280014 0 2020-03-16 13:42:24 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:42:24.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2299" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":146,"skipped":2272,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:42:24.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-91d9ef75-1f44-4f80-8c56-d76430b66375 STEP: Creating configMap with name cm-test-opt-upd-97abe71f-171f-4475-a759-23d9ebfcfd79 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-91d9ef75-1f44-4f80-8c56-d76430b66375 STEP: Updating configmap cm-test-opt-upd-97abe71f-171f-4475-a759-23d9ebfcfd79 STEP: Creating configMap with name cm-test-opt-create-ba768c34-5b18-44e4-9a67-1580a9f709d5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:43:39.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2256" for this suite. 
• [SLOW TEST:74.953 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":147,"skipped":2280,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:43:39.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:43:49.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4568" for this suite. 
STEP: Destroying namespace "nsdeletetest-9642" for this suite. Mar 16 13:43:49.966: INFO: Namespace nsdeletetest-9642 was already deleted STEP: Destroying namespace "nsdeletetest-3868" for this suite. • [SLOW TEST:10.462 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":148,"skipped":2281,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:43:49.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-4d4w STEP: Creating a pod to test atomic-volume-subpath Mar 16 13:43:50.106: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4d4w" in namespace "subpath-2714" to be "Succeeded or Failed" Mar 16 13:43:50.110: INFO: Pod 
"pod-subpath-test-secret-4d4w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.094012ms Mar 16 13:43:52.219: INFO: Pod "pod-subpath-test-secret-4d4w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112856245s Mar 16 13:43:54.223: INFO: Pod "pod-subpath-test-secret-4d4w": Phase="Running", Reason="", readiness=true. Elapsed: 4.116964463s Mar 16 13:43:56.227: INFO: Pod "pod-subpath-test-secret-4d4w": Phase="Running", Reason="", readiness=true. Elapsed: 6.120866783s Mar 16 13:43:58.231: INFO: Pod "pod-subpath-test-secret-4d4w": Phase="Running", Reason="", readiness=true. Elapsed: 8.125235737s Mar 16 13:44:00.237: INFO: Pod "pod-subpath-test-secret-4d4w": Phase="Running", Reason="", readiness=true. Elapsed: 10.13064587s Mar 16 13:44:02.241: INFO: Pod "pod-subpath-test-secret-4d4w": Phase="Running", Reason="", readiness=true. Elapsed: 12.135013109s Mar 16 13:44:04.245: INFO: Pod "pod-subpath-test-secret-4d4w": Phase="Running", Reason="", readiness=true. Elapsed: 14.138893203s Mar 16 13:44:06.248: INFO: Pod "pod-subpath-test-secret-4d4w": Phase="Running", Reason="", readiness=true. Elapsed: 16.141921726s Mar 16 13:44:08.252: INFO: Pod "pod-subpath-test-secret-4d4w": Phase="Running", Reason="", readiness=true. Elapsed: 18.146273618s Mar 16 13:44:10.257: INFO: Pod "pod-subpath-test-secret-4d4w": Phase="Running", Reason="", readiness=true. Elapsed: 20.150296331s Mar 16 13:44:12.261: INFO: Pod "pod-subpath-test-secret-4d4w": Phase="Running", Reason="", readiness=true. Elapsed: 22.15468116s Mar 16 13:44:14.265: INFO: Pod "pod-subpath-test-secret-4d4w": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.159064488s STEP: Saw pod success Mar 16 13:44:14.265: INFO: Pod "pod-subpath-test-secret-4d4w" satisfied condition "Succeeded or Failed" Mar 16 13:44:14.268: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-secret-4d4w container test-container-subpath-secret-4d4w: STEP: delete the pod Mar 16 13:44:14.304: INFO: Waiting for pod pod-subpath-test-secret-4d4w to disappear Mar 16 13:44:14.356: INFO: Pod pod-subpath-test-secret-4d4w no longer exists STEP: Deleting pod pod-subpath-test-secret-4d4w Mar 16 13:44:14.356: INFO: Deleting pod "pod-subpath-test-secret-4d4w" in namespace "subpath-2714" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:44:14.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2714" for this suite. • [SLOW TEST:24.397 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":149,"skipped":2284,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:44:14.368: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-3603 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3603 STEP: creating replication controller externalsvc in namespace services-3603 I0316 13:44:14.566411 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3603, replica count: 2 I0316 13:44:17.616976 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 13:44:20.617359 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 16 13:44:20.665: INFO: Creating new exec pod Mar 16 13:44:24.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-3603 execpodtzfcg -- /bin/sh -x -c nslookup nodeport-service' Mar 16 13:44:24.895: INFO: stderr: "I0316 13:44:24.827336 1081 log.go:172] (0xc0000e9e40) (0xc000920000) Create stream\nI0316 13:44:24.827396 1081 log.go:172] (0xc0000e9e40) (0xc000920000) Stream added, broadcasting: 1\nI0316 13:44:24.834672 1081 log.go:172] (0xc0000e9e40) Reply frame received for 1\nI0316 13:44:24.834734 1081 log.go:172] (0xc0000e9e40) (0xc000a94000) Create stream\nI0316 
13:44:24.834748 1081 log.go:172] (0xc0000e9e40) (0xc000a94000) Stream added, broadcasting: 3\nI0316 13:44:24.836031 1081 log.go:172] (0xc0000e9e40) Reply frame received for 3\nI0316 13:44:24.836065 1081 log.go:172] (0xc0000e9e40) (0xc0009200a0) Create stream\nI0316 13:44:24.836075 1081 log.go:172] (0xc0000e9e40) (0xc0009200a0) Stream added, broadcasting: 5\nI0316 13:44:24.837569 1081 log.go:172] (0xc0000e9e40) Reply frame received for 5\nI0316 13:44:24.883091 1081 log.go:172] (0xc0000e9e40) Data frame received for 5\nI0316 13:44:24.883111 1081 log.go:172] (0xc0009200a0) (5) Data frame handling\nI0316 13:44:24.883122 1081 log.go:172] (0xc0009200a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0316 13:44:24.889245 1081 log.go:172] (0xc0000e9e40) Data frame received for 3\nI0316 13:44:24.889278 1081 log.go:172] (0xc000a94000) (3) Data frame handling\nI0316 13:44:24.889306 1081 log.go:172] (0xc000a94000) (3) Data frame sent\nI0316 13:44:24.890135 1081 log.go:172] (0xc0000e9e40) Data frame received for 3\nI0316 13:44:24.890152 1081 log.go:172] (0xc000a94000) (3) Data frame handling\nI0316 13:44:24.890173 1081 log.go:172] (0xc000a94000) (3) Data frame sent\nI0316 13:44:24.890432 1081 log.go:172] (0xc0000e9e40) Data frame received for 5\nI0316 13:44:24.890448 1081 log.go:172] (0xc0009200a0) (5) Data frame handling\nI0316 13:44:24.890596 1081 log.go:172] (0xc0000e9e40) Data frame received for 3\nI0316 13:44:24.890613 1081 log.go:172] (0xc000a94000) (3) Data frame handling\nI0316 13:44:24.892047 1081 log.go:172] (0xc0000e9e40) Data frame received for 1\nI0316 13:44:24.892073 1081 log.go:172] (0xc000920000) (1) Data frame handling\nI0316 13:44:24.892098 1081 log.go:172] (0xc000920000) (1) Data frame sent\nI0316 13:44:24.892123 1081 log.go:172] (0xc0000e9e40) (0xc000920000) Stream removed, broadcasting: 1\nI0316 13:44:24.892147 1081 log.go:172] (0xc0000e9e40) Go away received\nI0316 13:44:24.892404 1081 log.go:172] (0xc0000e9e40) (0xc000920000) Stream removed, 
broadcasting: 1\nI0316 13:44:24.892417 1081 log.go:172] (0xc0000e9e40) (0xc000a94000) Stream removed, broadcasting: 3\nI0316 13:44:24.892424 1081 log.go:172] (0xc0000e9e40) (0xc0009200a0) Stream removed, broadcasting: 5\n" Mar 16 13:44:24.895: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3603.svc.cluster.local\tcanonical name = externalsvc.services-3603.svc.cluster.local.\nName:\texternalsvc.services-3603.svc.cluster.local\nAddress: 10.96.122.116\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3603, will wait for the garbage collector to delete the pods Mar 16 13:44:24.971: INFO: Deleting ReplicationController externalsvc took: 7.187322ms Mar 16 13:44:25.271: INFO: Terminating ReplicationController externalsvc pods took: 300.223225ms Mar 16 13:44:32.815: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:44:32.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3603" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:18.479 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":150,"skipped":2285,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:44:32.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 16 13:44:32.967: INFO: Waiting up to 5m0s for pod "pod-3dc246fd-974e-4329-bfb2-e6e940f5b541" in namespace "emptydir-1462" to be "Succeeded or Failed" Mar 16 13:44:32.994: INFO: Pod "pod-3dc246fd-974e-4329-bfb2-e6e940f5b541": Phase="Pending", Reason="", readiness=false. Elapsed: 26.255257ms Mar 16 13:44:35.010: INFO: Pod "pod-3dc246fd-974e-4329-bfb2-e6e940f5b541": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.04273629s Mar 16 13:44:37.015: INFO: Pod "pod-3dc246fd-974e-4329-bfb2-e6e940f5b541": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047271004s STEP: Saw pod success Mar 16 13:44:37.015: INFO: Pod "pod-3dc246fd-974e-4329-bfb2-e6e940f5b541" satisfied condition "Succeeded or Failed" Mar 16 13:44:37.018: INFO: Trying to get logs from node latest-worker pod pod-3dc246fd-974e-4329-bfb2-e6e940f5b541 container test-container: STEP: delete the pod Mar 16 13:44:37.061: INFO: Waiting for pod pod-3dc246fd-974e-4329-bfb2-e6e940f5b541 to disappear Mar 16 13:44:37.093: INFO: Pod pod-3dc246fd-974e-4329-bfb2-e6e940f5b541 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:44:37.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1462" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":151,"skipped":2311,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:44:37.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating 50 configmaps STEP: Creating RC which spawns 
configmap-volume pods Mar 16 13:44:37.677: INFO: Pod name wrapped-volume-race-c72b7e98-0ed5-439d-8625-196b1364cad6: Found 0 pods out of 5 Mar 16 13:44:42.685: INFO: Pod name wrapped-volume-race-c72b7e98-0ed5-439d-8625-196b1364cad6: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-c72b7e98-0ed5-439d-8625-196b1364cad6 in namespace emptydir-wrapper-4951, will wait for the garbage collector to delete the pods Mar 16 13:44:56.772: INFO: Deleting ReplicationController wrapped-volume-race-c72b7e98-0ed5-439d-8625-196b1364cad6 took: 9.021034ms Mar 16 13:44:57.073: INFO: Terminating ReplicationController wrapped-volume-race-c72b7e98-0ed5-439d-8625-196b1364cad6 pods took: 300.294764ms STEP: Creating RC which spawns configmap-volume pods Mar 16 13:45:13.305: INFO: Pod name wrapped-volume-race-9f9b72e8-76e2-4c4a-a8f8-6dcd909c7d9d: Found 0 pods out of 5 Mar 16 13:45:18.505: INFO: Pod name wrapped-volume-race-9f9b72e8-76e2-4c4a-a8f8-6dcd909c7d9d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9f9b72e8-76e2-4c4a-a8f8-6dcd909c7d9d in namespace emptydir-wrapper-4951, will wait for the garbage collector to delete the pods Mar 16 13:45:30.596: INFO: Deleting ReplicationController wrapped-volume-race-9f9b72e8-76e2-4c4a-a8f8-6dcd909c7d9d took: 15.382006ms Mar 16 13:45:30.896: INFO: Terminating ReplicationController wrapped-volume-race-9f9b72e8-76e2-4c4a-a8f8-6dcd909c7d9d pods took: 300.262976ms STEP: Creating RC which spawns configmap-volume pods Mar 16 13:45:43.226: INFO: Pod name wrapped-volume-race-a502d746-7a71-413a-8eee-0d5814d178a5: Found 0 pods out of 5 Mar 16 13:45:48.232: INFO: Pod name wrapped-volume-race-a502d746-7a71-413a-8eee-0d5814d178a5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a502d746-7a71-413a-8eee-0d5814d178a5 in namespace emptydir-wrapper-4951, will wait for the 
garbage collector to delete the pods Mar 16 13:46:02.324: INFO: Deleting ReplicationController wrapped-volume-race-a502d746-7a71-413a-8eee-0d5814d178a5 took: 15.830925ms Mar 16 13:46:02.625: INFO: Terminating ReplicationController wrapped-volume-race-a502d746-7a71-413a-8eee-0d5814d178a5 pods took: 300.311651ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:46:14.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4951" for this suite. • [SLOW TEST:97.505 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":152,"skipped":2320,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:46:14.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:46:26.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1338" for this suite. • [SLOW TEST:11.571 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":275,"completed":153,"skipped":2328,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:46:26.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Mar 16 13:46:26.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9590' Mar 16 13:46:26.746: INFO: stderr: "" Mar 16 13:46:26.746: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Mar 16 13:46:26.746: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9590' Mar 16 13:46:26.890: INFO: stderr: "" Mar 16 13:46:26.890: INFO: stdout: "update-demo-nautilus-qbg8x update-demo-nautilus-w8kdc " Mar 16 13:46:26.890: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qbg8x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9590' Mar 16 13:46:26.996: INFO: stderr: "" Mar 16 13:46:26.996: INFO: stdout: "" Mar 16 13:46:26.996: INFO: update-demo-nautilus-qbg8x is created but not running Mar 16 13:46:31.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9590' Mar 16 13:46:32.581: INFO: stderr: "" Mar 16 13:46:32.581: INFO: stdout: "update-demo-nautilus-qbg8x update-demo-nautilus-w8kdc " Mar 16 13:46:32.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qbg8x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9590' Mar 16 13:46:32.772: INFO: stderr: "" Mar 16 13:46:32.773: INFO: stdout: "true" Mar 16 13:46:32.773: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qbg8x -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9590' Mar 16 13:46:32.858: INFO: stderr: "" Mar 16 13:46:32.858: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 13:46:32.858: INFO: validating pod update-demo-nautilus-qbg8x Mar 16 13:46:32.862: INFO: got data: { "image": "nautilus.jpg" } Mar 16 13:46:32.862: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 16 13:46:32.862: INFO: update-demo-nautilus-qbg8x is verified up and running Mar 16 13:46:32.862: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8kdc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9590' Mar 16 13:46:32.966: INFO: stderr: "" Mar 16 13:46:32.966: INFO: stdout: "true" Mar 16 13:46:32.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w8kdc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9590' Mar 16 13:46:33.050: INFO: stderr: "" Mar 16 13:46:33.050: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 16 13:46:33.050: INFO: validating pod update-demo-nautilus-w8kdc Mar 16 13:46:33.054: INFO: got data: { "image": "nautilus.jpg" } Mar 16 13:46:33.054: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Mar 16 13:46:33.054: INFO: update-demo-nautilus-w8kdc is verified up and running STEP: using delete to clean up resources Mar 16 13:46:33.054: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9590' Mar 16 13:46:33.162: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 16 13:46:33.162: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 16 13:46:33.162: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9590' Mar 16 13:46:33.245: INFO: stderr: "No resources found in kubectl-9590 namespace.\n" Mar 16 13:46:33.245: INFO: stdout: "" Mar 16 13:46:33.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9590 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 16 13:46:33.331: INFO: stderr: "" Mar 16 13:46:33.331: INFO: stdout: "update-demo-nautilus-qbg8x\nupdate-demo-nautilus-w8kdc\n" Mar 16 13:46:33.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9590' Mar 16 13:46:33.929: INFO: stderr: "No resources found in kubectl-9590 namespace.\n" Mar 16 13:46:33.929: INFO: stdout: "" Mar 16 13:46:33.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9590 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' 
Mar 16 13:46:34.021: INFO: stderr: "" Mar 16 13:46:34.021: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:46:34.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9590" for this suite. • [SLOW TEST:7.850 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":154,"skipped":2341,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:46:34.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Mar 16 13:46:34.190: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix673119563/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:46:34.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5224" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":275,"completed":155,"skipped":2359,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:46:34.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 13:46:36.887: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 13:46:38.897: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963196, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963196, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963197, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963196, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 13:46:41.927: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:46:42.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3258" for this suite. STEP: Destroying namespace "webhook-3258-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.847 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":156,"skipped":2359,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:46:42.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0316 13:46:43.383492 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 16 13:46:43.383: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:46:43.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3639" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":157,"skipped":2366,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:46:43.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:46:43.532: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1148' Mar 16 13:46:44.590: INFO: stderr: "" Mar 16 13:46:44.590: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 16 13:46:44.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1148' Mar 16 13:46:45.595: INFO: stderr: "" Mar 16 13:46:45.595: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Mar 16 13:46:46.671: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 13:46:46.671: INFO: Found 0 / 1 Mar 16 13:46:47.659: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 13:46:47.659: INFO: Found 0 / 1 Mar 16 13:46:48.605: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 13:46:48.605: INFO: Found 0 / 1 Mar 16 13:46:49.599: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 13:46:49.599: INFO: Found 1 / 1 Mar 16 13:46:49.599: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 16 13:46:49.603: INFO: Selector matched 1 pods for map[app:agnhost] Mar 16 13:46:49.603: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 16 13:46:49.603: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-4xnpl --namespace=kubectl-1148' Mar 16 13:46:49.704: INFO: stderr: "" Mar 16 13:46:49.704: INFO: stdout: "Name: agnhost-master-4xnpl\nNamespace: kubectl-1148\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Mon, 16 Mar 2020 13:46:45 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.67\nIPs:\n IP: 10.244.1.67\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://c7a61fe860a5206f9b3b3b98ecdcca1e48b1f05f4333ed32d8de68e5e589ab7b\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 16 Mar 2020 13:46:47 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-bndz8 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-bndz8:\n Type: Secret (a volume populated 
by a Secret)\n SecretName: default-token-bndz8\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-1148/agnhost-master-4xnpl to latest-worker2\n Normal Pulled 3s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 2s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 2s kubelet, latest-worker2 Started container agnhost-master\n" Mar 16 13:46:49.704: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1148' Mar 16 13:46:49.827: INFO: stderr: "" Mar 16 13:46:49.827: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1148\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-4xnpl\n" Mar 16 13:46:49.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1148' Mar 16 13:46:49.927: INFO: stderr: "" Mar 16 13:46:49.927: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1148\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 
10.96.181.30\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.67:6379\nSession Affinity: None\nEvents: \n" Mar 16 13:46:49.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane' Mar 16 13:46:50.082: INFO: stderr: "" Mar 16 13:46:50.082: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Mon, 16 Mar 2020 13:46:49 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 16 Mar 2020 13:43:37 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 16 Mar 2020 13:43:37 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 16 Mar 2020 13:43:37 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 16 Mar 2020 13:43:37 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n 
hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 19h\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 19h\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 19h\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 19h\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Mar 16 13:46:50.082: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe namespace kubectl-1148' Mar 16 13:46:50.175: INFO: stderr: "" Mar 16 13:46:50.175: INFO: stdout: "Name: kubectl-1148\nLabels: e2e-framework=kubectl\n 
e2e-run=47f29d42-c6ff-4dc9-a320-1ad8ab3df580\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:46:50.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1148" for this suite. • [SLOW TEST:6.790 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":158,"skipped":2387,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:46:50.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Mar 16 13:46:50.228: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9859' Mar 16 13:46:50.496: INFO: stderr: "" Mar 16 13:46:50.496: INFO: stdout: "pod/pause created\n" Mar 16 13:46:50.496: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 16 13:46:50.497: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9859" to be "running and ready" Mar 16 13:46:50.515: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 18.277556ms Mar 16 13:46:52.518: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02114933s Mar 16 13:46:54.532: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.035346737s Mar 16 13:46:54.532: INFO: Pod "pause" satisfied condition "running and ready" Mar 16 13:46:54.532: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Mar 16 13:46:54.532: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9859' Mar 16 13:46:54.628: INFO: stderr: "" Mar 16 13:46:54.628: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 16 13:46:54.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9859' Mar 16 13:46:54.730: INFO: stderr: "" Mar 16 13:46:54.730: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 16 13:46:54.730: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9859' Mar 16 13:46:54.829: INFO: stderr: "" Mar 16 13:46:54.829: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 16 13:46:54.829: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9859' Mar 16 13:46:54.952: INFO: stderr: "" Mar 16 13:46:54.952: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Mar 16 13:46:54.952: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9859' Mar 16 13:46:55.082: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 16 13:46:55.082: INFO: stdout: "pod \"pause\" force deleted\n" Mar 16 13:46:55.082: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9859' Mar 16 13:46:55.206: INFO: stderr: "No resources found in kubectl-9859 namespace.\n" Mar 16 13:46:55.206: INFO: stdout: "" Mar 16 13:46:55.206: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9859 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 16 13:46:55.364: INFO: stderr: "" Mar 16 13:46:55.365: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:46:55.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9859" for this suite. 
• [SLOW TEST:5.190 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":159,"skipped":2395,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:46:55.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-scpr STEP: Creating a pod to test atomic-volume-subpath Mar 16 13:46:55.911: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-scpr" in namespace "subpath-7664" to be "Succeeded or Failed" Mar 16 13:46:55.914: INFO: Pod "pod-subpath-test-configmap-scpr": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.184838ms Mar 16 13:46:57.928: INFO: Pod "pod-subpath-test-configmap-scpr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016863186s Mar 16 13:47:00.030: INFO: Pod "pod-subpath-test-configmap-scpr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119674791s Mar 16 13:47:02.034: INFO: Pod "pod-subpath-test-configmap-scpr": Phase="Running", Reason="", readiness=true. Elapsed: 6.123659453s Mar 16 13:47:04.038: INFO: Pod "pod-subpath-test-configmap-scpr": Phase="Running", Reason="", readiness=true. Elapsed: 8.127425009s Mar 16 13:47:06.042: INFO: Pod "pod-subpath-test-configmap-scpr": Phase="Running", Reason="", readiness=true. Elapsed: 10.131680967s Mar 16 13:47:08.047: INFO: Pod "pod-subpath-test-configmap-scpr": Phase="Running", Reason="", readiness=true. Elapsed: 12.135974908s Mar 16 13:47:10.050: INFO: Pod "pod-subpath-test-configmap-scpr": Phase="Running", Reason="", readiness=true. Elapsed: 14.139802698s Mar 16 13:47:12.055: INFO: Pod "pod-subpath-test-configmap-scpr": Phase="Running", Reason="", readiness=true. Elapsed: 16.143938704s Mar 16 13:47:14.058: INFO: Pod "pod-subpath-test-configmap-scpr": Phase="Running", Reason="", readiness=true. Elapsed: 18.147495183s Mar 16 13:47:16.062: INFO: Pod "pod-subpath-test-configmap-scpr": Phase="Running", Reason="", readiness=true. Elapsed: 20.151225931s Mar 16 13:47:18.066: INFO: Pod "pod-subpath-test-configmap-scpr": Phase="Running", Reason="", readiness=true. Elapsed: 22.15579224s Mar 16 13:47:20.070: INFO: Pod "pod-subpath-test-configmap-scpr": Phase="Running", Reason="", readiness=true. Elapsed: 24.159208969s Mar 16 13:47:22.073: INFO: Pod "pod-subpath-test-configmap-scpr": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.16280127s STEP: Saw pod success Mar 16 13:47:22.074: INFO: Pod "pod-subpath-test-configmap-scpr" satisfied condition "Succeeded or Failed" Mar 16 13:47:22.076: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-scpr container test-container-subpath-configmap-scpr: STEP: delete the pod Mar 16 13:47:22.200: INFO: Waiting for pod pod-subpath-test-configmap-scpr to disappear Mar 16 13:47:22.233: INFO: Pod pod-subpath-test-configmap-scpr no longer exists STEP: Deleting pod pod-subpath-test-configmap-scpr Mar 16 13:47:22.233: INFO: Deleting pod "pod-subpath-test-configmap-scpr" in namespace "subpath-7664" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:47:22.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7664" for this suite. • [SLOW TEST:26.869 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":160,"skipped":2404,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes 
client Mar 16 13:47:22.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-9e8ed038-6019-4f68-b943-95258c886ff9 STEP: Creating a pod to test consume configMaps Mar 16 13:47:22.332: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a76621cc-eb34-435b-947e-93bf8b666473" in namespace "projected-7577" to be "Succeeded or Failed" Mar 16 13:47:22.336: INFO: Pod "pod-projected-configmaps-a76621cc-eb34-435b-947e-93bf8b666473": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079747ms Mar 16 13:47:24.339: INFO: Pod "pod-projected-configmaps-a76621cc-eb34-435b-947e-93bf8b666473": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00778935s Mar 16 13:47:26.343: INFO: Pod "pod-projected-configmaps-a76621cc-eb34-435b-947e-93bf8b666473": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011166054s STEP: Saw pod success Mar 16 13:47:26.343: INFO: Pod "pod-projected-configmaps-a76621cc-eb34-435b-947e-93bf8b666473" satisfied condition "Succeeded or Failed" Mar 16 13:47:26.345: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-a76621cc-eb34-435b-947e-93bf8b666473 container projected-configmap-volume-test: STEP: delete the pod Mar 16 13:47:26.362: INFO: Waiting for pod pod-projected-configmaps-a76621cc-eb34-435b-947e-93bf8b666473 to disappear Mar 16 13:47:26.366: INFO: Pod pod-projected-configmaps-a76621cc-eb34-435b-947e-93bf8b666473 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:47:26.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7577" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":161,"skipped":2408,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:47:26.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to NodePort [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-4694 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-4694 I0316 13:47:26.578745 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-4694, replica count: 2 I0316 13:47:29.629262 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 13:47:32.629576 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 16 13:47:32.629: INFO: Creating new exec pod Mar 16 13:47:37.662: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4694 execpodvnzlm -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 16 13:47:37.863: INFO: stderr: "I0316 13:47:37.791365 1704 log.go:172] (0xc00003a160) (0xc000542aa0) Create stream\nI0316 13:47:37.791421 1704 log.go:172] (0xc00003a160) (0xc000542aa0) Stream added, broadcasting: 1\nI0316 13:47:37.795391 1704 log.go:172] (0xc00003a160) Reply frame received for 1\nI0316 13:47:37.795466 1704 log.go:172] (0xc00003a160) (0xc000833220) Create stream\nI0316 13:47:37.795491 1704 log.go:172] (0xc00003a160) (0xc000833220) Stream added, broadcasting: 3\nI0316 13:47:37.798312 1704 log.go:172] (0xc00003a160) Reply frame received for 3\nI0316 13:47:37.798348 1704 log.go:172] (0xc00003a160) (0xc00094e000) Create stream\nI0316 13:47:37.798362 1704 log.go:172] (0xc00003a160) (0xc00094e000) Stream added, broadcasting: 5\nI0316 13:47:37.799584 1704 log.go:172] (0xc00003a160) Reply frame received for 5\nI0316 13:47:37.857829 1704 log.go:172] 
(0xc00003a160) Data frame received for 5\nI0316 13:47:37.857861 1704 log.go:172] (0xc00094e000) (5) Data frame handling\nI0316 13:47:37.857879 1704 log.go:172] (0xc00094e000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0316 13:47:37.858632 1704 log.go:172] (0xc00003a160) Data frame received for 5\nI0316 13:47:37.858655 1704 log.go:172] (0xc00094e000) (5) Data frame handling\nI0316 13:47:37.858678 1704 log.go:172] (0xc00094e000) (5) Data frame sent\nI0316 13:47:37.858704 1704 log.go:172] (0xc00003a160) Data frame received for 5\nI0316 13:47:37.858712 1704 log.go:172] (0xc00094e000) (5) Data frame handling\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0316 13:47:37.858895 1704 log.go:172] (0xc00003a160) Data frame received for 3\nI0316 13:47:37.858920 1704 log.go:172] (0xc000833220) (3) Data frame handling\nI0316 13:47:37.860368 1704 log.go:172] (0xc00003a160) Data frame received for 1\nI0316 13:47:37.860384 1704 log.go:172] (0xc000542aa0) (1) Data frame handling\nI0316 13:47:37.860402 1704 log.go:172] (0xc000542aa0) (1) Data frame sent\nI0316 13:47:37.860412 1704 log.go:172] (0xc00003a160) (0xc000542aa0) Stream removed, broadcasting: 1\nI0316 13:47:37.860464 1704 log.go:172] (0xc00003a160) Go away received\nI0316 13:47:37.860739 1704 log.go:172] (0xc00003a160) (0xc000542aa0) Stream removed, broadcasting: 1\nI0316 13:47:37.860755 1704 log.go:172] (0xc00003a160) (0xc000833220) Stream removed, broadcasting: 3\nI0316 13:47:37.860767 1704 log.go:172] (0xc00003a160) (0xc00094e000) Stream removed, broadcasting: 5\n" Mar 16 13:47:37.863: INFO: stdout: "" Mar 16 13:47:37.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4694 execpodvnzlm -- /bin/sh -x -c nc -zv -t -w 2 10.96.178.207 80' Mar 16 13:47:38.084: INFO: stderr: "I0316 13:47:37.999518 1725 log.go:172] (0xc000aaa000) (0xc0006a1400) Create stream\nI0316 13:47:37.999577 1725 log.go:172] 
(0xc000aaa000) (0xc0006a1400) Stream added, broadcasting: 1\nI0316 13:47:38.003251 1725 log.go:172] (0xc000aaa000) Reply frame received for 1\nI0316 13:47:38.003305 1725 log.go:172] (0xc000aaa000) (0xc0002eaaa0) Create stream\nI0316 13:47:38.003320 1725 log.go:172] (0xc000aaa000) (0xc0002eaaa0) Stream added, broadcasting: 3\nI0316 13:47:38.004296 1725 log.go:172] (0xc000aaa000) Reply frame received for 3\nI0316 13:47:38.004323 1725 log.go:172] (0xc000aaa000) (0xc0002eab40) Create stream\nI0316 13:47:38.004334 1725 log.go:172] (0xc000aaa000) (0xc0002eab40) Stream added, broadcasting: 5\nI0316 13:47:38.005423 1725 log.go:172] (0xc000aaa000) Reply frame received for 5\nI0316 13:47:38.077101 1725 log.go:172] (0xc000aaa000) Data frame received for 3\nI0316 13:47:38.077304 1725 log.go:172] (0xc0002eaaa0) (3) Data frame handling\nI0316 13:47:38.077348 1725 log.go:172] (0xc000aaa000) Data frame received for 5\nI0316 13:47:38.077376 1725 log.go:172] (0xc0002eab40) (5) Data frame handling\nI0316 13:47:38.077423 1725 log.go:172] (0xc0002eab40) (5) Data frame sent\nI0316 13:47:38.077453 1725 log.go:172] (0xc000aaa000) Data frame received for 5\nI0316 13:47:38.077472 1725 log.go:172] (0xc0002eab40) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.178.207 80\nConnection to 10.96.178.207 80 port [tcp/http] succeeded!\nI0316 13:47:38.079155 1725 log.go:172] (0xc000aaa000) Data frame received for 1\nI0316 13:47:38.079182 1725 log.go:172] (0xc0006a1400) (1) Data frame handling\nI0316 13:47:38.079196 1725 log.go:172] (0xc0006a1400) (1) Data frame sent\nI0316 13:47:38.079218 1725 log.go:172] (0xc000aaa000) (0xc0006a1400) Stream removed, broadcasting: 1\nI0316 13:47:38.079241 1725 log.go:172] (0xc000aaa000) Go away received\nI0316 13:47:38.079747 1725 log.go:172] (0xc000aaa000) (0xc0006a1400) Stream removed, broadcasting: 1\nI0316 13:47:38.079775 1725 log.go:172] (0xc000aaa000) (0xc0002eaaa0) Stream removed, broadcasting: 3\nI0316 13:47:38.079788 1725 log.go:172] (0xc000aaa000) 
(0xc0002eab40) Stream removed, broadcasting: 5\n" Mar 16 13:47:38.084: INFO: stdout: "" Mar 16 13:47:38.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4694 execpodvnzlm -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31213' Mar 16 13:47:38.286: INFO: stderr: "I0316 13:47:38.214240 1745 log.go:172] (0xc0009a8000) (0xc000847400) Create stream\nI0316 13:47:38.214294 1745 log.go:172] (0xc0009a8000) (0xc000847400) Stream added, broadcasting: 1\nI0316 13:47:38.216199 1745 log.go:172] (0xc0009a8000) Reply frame received for 1\nI0316 13:47:38.216244 1745 log.go:172] (0xc0009a8000) (0xc000520c80) Create stream\nI0316 13:47:38.216254 1745 log.go:172] (0xc0009a8000) (0xc000520c80) Stream added, broadcasting: 3\nI0316 13:47:38.217408 1745 log.go:172] (0xc0009a8000) Reply frame received for 3\nI0316 13:47:38.217486 1745 log.go:172] (0xc0009a8000) (0xc000aaa000) Create stream\nI0316 13:47:38.217520 1745 log.go:172] (0xc0009a8000) (0xc000aaa000) Stream added, broadcasting: 5\nI0316 13:47:38.218383 1745 log.go:172] (0xc0009a8000) Reply frame received for 5\nI0316 13:47:38.280245 1745 log.go:172] (0xc0009a8000) Data frame received for 5\nI0316 13:47:38.280288 1745 log.go:172] (0xc000aaa000) (5) Data frame handling\nI0316 13:47:38.280325 1745 log.go:172] (0xc000aaa000) (5) Data frame sent\nI0316 13:47:38.280357 1745 log.go:172] (0xc0009a8000) Data frame received for 5\nI0316 13:47:38.280375 1745 log.go:172] (0xc000aaa000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31213\nConnection to 172.17.0.13 31213 port [tcp/31213] succeeded!\nI0316 13:47:38.280638 1745 log.go:172] (0xc0009a8000) Data frame received for 3\nI0316 13:47:38.280668 1745 log.go:172] (0xc000520c80) (3) Data frame handling\nI0316 13:47:38.282362 1745 log.go:172] (0xc0009a8000) Data frame received for 1\nI0316 13:47:38.282389 1745 log.go:172] (0xc000847400) (1) Data frame handling\nI0316 13:47:38.282421 1745 log.go:172] 
(0xc000847400) (1) Data frame sent\nI0316 13:47:38.282441 1745 log.go:172] (0xc0009a8000) (0xc000847400) Stream removed, broadcasting: 1\nI0316 13:47:38.282465 1745 log.go:172] (0xc0009a8000) Go away received\nI0316 13:47:38.282985 1745 log.go:172] (0xc0009a8000) (0xc000847400) Stream removed, broadcasting: 1\nI0316 13:47:38.283013 1745 log.go:172] (0xc0009a8000) (0xc000520c80) Stream removed, broadcasting: 3\nI0316 13:47:38.283026 1745 log.go:172] (0xc0009a8000) (0xc000aaa000) Stream removed, broadcasting: 5\n" Mar 16 13:47:38.286: INFO: stdout: "" Mar 16 13:47:38.287: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-4694 execpodvnzlm -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31213' Mar 16 13:47:38.493: INFO: stderr: "I0316 13:47:38.416929 1767 log.go:172] (0xc00003a630) (0xc00081d4a0) Create stream\nI0316 13:47:38.416981 1767 log.go:172] (0xc00003a630) (0xc00081d4a0) Stream added, broadcasting: 1\nI0316 13:47:38.419795 1767 log.go:172] (0xc00003a630) Reply frame received for 1\nI0316 13:47:38.419836 1767 log.go:172] (0xc00003a630) (0xc000a30000) Create stream\nI0316 13:47:38.419847 1767 log.go:172] (0xc00003a630) (0xc000a30000) Stream added, broadcasting: 3\nI0316 13:47:38.420844 1767 log.go:172] (0xc00003a630) Reply frame received for 3\nI0316 13:47:38.420889 1767 log.go:172] (0xc00003a630) (0xc00053ea00) Create stream\nI0316 13:47:38.420898 1767 log.go:172] (0xc00003a630) (0xc00053ea00) Stream added, broadcasting: 5\nI0316 13:47:38.421944 1767 log.go:172] (0xc00003a630) Reply frame received for 5\nI0316 13:47:38.488529 1767 log.go:172] (0xc00003a630) Data frame received for 5\nI0316 13:47:38.488569 1767 log.go:172] (0xc00053ea00) (5) Data frame handling\nI0316 13:47:38.488595 1767 log.go:172] (0xc00053ea00) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 31213\nConnection to 172.17.0.12 31213 port [tcp/31213] succeeded!\nI0316 13:47:38.488967 1767 log.go:172] (0xc00003a630) 
Data frame received for 3\nI0316 13:47:38.488980 1767 log.go:172] (0xc000a30000) (3) Data frame handling\nI0316 13:47:38.489035 1767 log.go:172] (0xc00003a630) Data frame received for 5\nI0316 13:47:38.489064 1767 log.go:172] (0xc00053ea00) (5) Data frame handling\nI0316 13:47:38.490297 1767 log.go:172] (0xc00003a630) Data frame received for 1\nI0316 13:47:38.490311 1767 log.go:172] (0xc00081d4a0) (1) Data frame handling\nI0316 13:47:38.490323 1767 log.go:172] (0xc00081d4a0) (1) Data frame sent\nI0316 13:47:38.490335 1767 log.go:172] (0xc00003a630) (0xc00081d4a0) Stream removed, broadcasting: 1\nI0316 13:47:38.490376 1767 log.go:172] (0xc00003a630) Go away received\nI0316 13:47:38.490591 1767 log.go:172] (0xc00003a630) (0xc00081d4a0) Stream removed, broadcasting: 1\nI0316 13:47:38.490602 1767 log.go:172] (0xc00003a630) (0xc000a30000) Stream removed, broadcasting: 3\nI0316 13:47:38.490608 1767 log.go:172] (0xc00003a630) (0xc00053ea00) Stream removed, broadcasting: 5\n" Mar 16 13:47:38.494: INFO: stdout: "" Mar 16 13:47:38.494: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:47:38.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4694" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:12.210 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":162,"skipped":2468,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:47:38.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-373a840d-398d-4b19-8729-455393819969 STEP: Creating secret with name s-test-opt-upd-bdc864dc-a282-4d5f-8749-0ec7a59e8faf STEP: Creating the pod STEP: Deleting secret s-test-opt-del-373a840d-398d-4b19-8729-455393819969 STEP: Updating secret s-test-opt-upd-bdc864dc-a282-4d5f-8749-0ec7a59e8faf STEP: Creating secret with name s-test-opt-create-a1b4e74b-e28d-422d-a826-b359c28d3b02 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:48:47.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6553" for this suite. • [SLOW TEST:68.867 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2477,"failed":0} SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:48:47.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod test-webserver-c3a1f940-0ff3-45b6-8d6f-80b39e259014 in namespace container-probe-5348 Mar 16 13:48:51.521: INFO: Started pod test-webserver-c3a1f940-0ff3-45b6-8d6f-80b39e259014 in namespace container-probe-5348 STEP: checking the pod's current state 
and verifying that restartCount is present Mar 16 13:48:51.524: INFO: Initial restart count of pod test-webserver-c3a1f940-0ff3-45b6-8d6f-80b39e259014 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:52:52.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5348" for this suite. • [SLOW TEST:244.636 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2479,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:52:52.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-a681c270-9ec7-45e9-95de-16ace2b23e8a STEP: Creating a pod to test consume configMaps Mar 16 
13:52:52.217: INFO: Waiting up to 5m0s for pod "pod-configmaps-aa4025d5-bbe1-4fc7-9e56-a4d08ad4f13f" in namespace "configmap-7120" to be "Succeeded or Failed" Mar 16 13:52:52.431: INFO: Pod "pod-configmaps-aa4025d5-bbe1-4fc7-9e56-a4d08ad4f13f": Phase="Pending", Reason="", readiness=false. Elapsed: 213.369712ms Mar 16 13:52:54.435: INFO: Pod "pod-configmaps-aa4025d5-bbe1-4fc7-9e56-a4d08ad4f13f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217817292s Mar 16 13:52:56.439: INFO: Pod "pod-configmaps-aa4025d5-bbe1-4fc7-9e56-a4d08ad4f13f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.221577974s STEP: Saw pod success Mar 16 13:52:56.439: INFO: Pod "pod-configmaps-aa4025d5-bbe1-4fc7-9e56-a4d08ad4f13f" satisfied condition "Succeeded or Failed" Mar 16 13:52:56.442: INFO: Trying to get logs from node latest-worker pod pod-configmaps-aa4025d5-bbe1-4fc7-9e56-a4d08ad4f13f container configmap-volume-test: STEP: delete the pod Mar 16 13:52:56.484: INFO: Waiting for pod pod-configmaps-aa4025d5-bbe1-4fc7-9e56-a4d08ad4f13f to disappear Mar 16 13:52:56.502: INFO: Pod pod-configmaps-aa4025d5-bbe1-4fc7-9e56-a4d08ad4f13f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:52:56.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7120" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2497,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:52:56.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-l4g6h in namespace proxy-6012 I0316 13:52:56.606152 7 runners.go:190] Created replication controller with name: proxy-service-l4g6h, namespace: proxy-6012, replica count: 1 I0316 13:52:57.656679 7 runners.go:190] proxy-service-l4g6h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 13:52:58.656953 7 runners.go:190] proxy-service-l4g6h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 13:52:59.657298 7 runners.go:190] proxy-service-l4g6h Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 13:53:00.657532 7 runners.go:190] proxy-service-l4g6h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 13:53:01.657771 7 runners.go:190] proxy-service-l4g6h Pods: 1 out of 
1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 13:53:02.658000 7 runners.go:190] proxy-service-l4g6h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 13:53:03.658270 7 runners.go:190] proxy-service-l4g6h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 13:53:04.658557 7 runners.go:190] proxy-service-l4g6h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 13:53:05.658939 7 runners.go:190] proxy-service-l4g6h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 13:53:06.659166 7 runners.go:190] proxy-service-l4g6h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 13:53:07.659415 7 runners.go:190] proxy-service-l4g6h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 13:53:08.659679 7 runners.go:190] proxy-service-l4g6h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 13:53:09.660057 7 runners.go:190] proxy-service-l4g6h Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0316 13:53:10.660313 7 runners.go:190] proxy-service-l4g6h Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 16 13:53:10.663: INFO: setup took 14.088702587s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 16 13:53:10.671: INFO: (0) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:1080/proxy/: test<... 
(200; 7.636123ms) Mar 16 13:53:10.672: INFO: (0) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname2/proxy/: bar (200; 8.110814ms) Mar 16 13:53:10.672: INFO: (0) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:1080/proxy/: ... (200; 8.205856ms) Mar 16 13:53:10.672: INFO: (0) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname1/proxy/: foo (200; 8.346964ms) Mar 16 13:53:10.672: INFO: (0) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 8.610687ms) Mar 16 13:53:10.672: INFO: (0) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s/proxy/: test (200; 8.767815ms) Mar 16 13:53:10.672: INFO: (0) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname1/proxy/: foo (200; 8.795983ms) Mar 16 13:53:10.672: INFO: (0) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 9.030673ms) Mar 16 13:53:10.675: INFO: (0) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname2/proxy/: bar (200; 11.40785ms) Mar 16 13:53:10.676: INFO: (0) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 12.232856ms) Mar 16 13:53:10.676: INFO: (0) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 12.407813ms) Mar 16 13:53:10.680: INFO: (0) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:462/proxy/: tls qux (200; 16.653655ms) Mar 16 13:53:10.680: INFO: (0) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname1/proxy/: tls baz (200; 16.653048ms) Mar 16 13:53:10.680: INFO: (0) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname2/proxy/: tls qux (200; 16.566549ms) Mar 16 13:53:10.680: INFO: (0) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:460/proxy/: tls baz (200; 16.576845ms) Mar 16 13:53:10.680: INFO: (0) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:443/proxy/: test 
(200; 2.837784ms) Mar 16 13:53:10.683: INFO: (1) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:460/proxy/: tls baz (200; 2.816305ms) Mar 16 13:53:10.683: INFO: (1) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:1080/proxy/: test<... (200; 2.906449ms) Mar 16 13:53:10.683: INFO: (1) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 2.849194ms) Mar 16 13:53:10.683: INFO: (1) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 2.87022ms) Mar 16 13:53:10.684: INFO: (1) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:462/proxy/: tls qux (200; 3.772527ms) Mar 16 13:53:10.684: INFO: (1) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:1080/proxy/: ... (200; 3.804474ms) Mar 16 13:53:10.684: INFO: (1) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:443/proxy/: ... (200; 3.992532ms) Mar 16 13:53:10.689: INFO: (2) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname1/proxy/: tls baz (200; 4.201868ms) Mar 16 13:53:10.689: INFO: (2) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname2/proxy/: bar (200; 4.247109ms) Mar 16 13:53:10.689: INFO: (2) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname2/proxy/: bar (200; 4.288608ms) Mar 16 13:53:10.689: INFO: (2) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname1/proxy/: foo (200; 4.232555ms) Mar 16 13:53:10.689: INFO: (2) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s/proxy/: test (200; 4.278038ms) Mar 16 13:53:10.689: INFO: (2) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:460/proxy/: tls baz (200; 4.354333ms) Mar 16 13:53:10.689: INFO: (2) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname2/proxy/: tls qux (200; 4.353271ms) Mar 16 13:53:10.689: INFO: (2) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:1080/proxy/: test<... 
(200; 4.308517ms) Mar 16 13:53:10.689: INFO: (2) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname1/proxy/: foo (200; 4.406759ms) Mar 16 13:53:10.691: INFO: (3) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:1080/proxy/: test<... (200; 1.949019ms) Mar 16 13:53:10.694: INFO: (3) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s/proxy/: test (200; 4.533049ms) Mar 16 13:53:10.694: INFO: (3) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 4.794858ms) Mar 16 13:53:10.694: INFO: (3) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:462/proxy/: tls qux (200; 5.021791ms) Mar 16 13:53:10.695: INFO: (3) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:460/proxy/: tls baz (200; 5.259775ms) Mar 16 13:53:10.695: INFO: (3) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 5.272375ms) Mar 16 13:53:10.695: INFO: (3) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:1080/proxy/: ... (200; 5.286948ms) Mar 16 13:53:10.695: INFO: (3) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 5.272073ms) Mar 16 13:53:10.695: INFO: (3) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 5.444944ms) Mar 16 13:53:10.695: INFO: (3) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:443/proxy/: ... (200; 4.604226ms) Mar 16 13:53:10.702: INFO: (4) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s/proxy/: test (200; 4.628093ms) Mar 16 13:53:10.702: INFO: (4) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:462/proxy/: tls qux (200; 4.693887ms) Mar 16 13:53:10.702: INFO: (4) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:443/proxy/: test<... 
(200; 5.512008ms) Mar 16 13:53:10.704: INFO: (4) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname2/proxy/: bar (200; 6.007124ms) Mar 16 13:53:10.704: INFO: (4) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname2/proxy/: tls qux (200; 5.996957ms) Mar 16 13:53:10.704: INFO: (4) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname1/proxy/: tls baz (200; 6.006606ms) Mar 16 13:53:10.704: INFO: (4) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname1/proxy/: foo (200; 6.003914ms) Mar 16 13:53:10.704: INFO: (4) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname1/proxy/: foo (200; 6.015269ms) Mar 16 13:53:10.706: INFO: (5) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:1080/proxy/: ... (200; 2.65254ms) Mar 16 13:53:10.707: INFO: (5) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 2.81804ms) Mar 16 13:53:10.707: INFO: (5) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 2.833091ms) Mar 16 13:53:10.707: INFO: (5) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:460/proxy/: tls baz (200; 3.355613ms) Mar 16 13:53:10.707: INFO: (5) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:1080/proxy/: test<... 
(200; 3.632065ms) Mar 16 13:53:10.708: INFO: (5) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 4.089119ms) Mar 16 13:53:10.708: INFO: (5) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 4.10472ms) Mar 16 13:53:10.708: INFO: (5) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:443/proxy/: test (200; 4.537131ms) Mar 16 13:53:10.708: INFO: (5) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:462/proxy/: tls qux (200; 4.60542ms) Mar 16 13:53:10.709: INFO: (5) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname2/proxy/: bar (200; 4.847457ms) Mar 16 13:53:10.710: INFO: (5) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname1/proxy/: foo (200; 5.888989ms) Mar 16 13:53:10.710: INFO: (5) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname1/proxy/: foo (200; 5.819053ms) Mar 16 13:53:10.710: INFO: (5) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname1/proxy/: tls baz (200; 5.938295ms) Mar 16 13:53:10.710: INFO: (5) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname2/proxy/: bar (200; 6.484545ms) Mar 16 13:53:10.719: INFO: (6) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s/proxy/: test (200; 8.379477ms) Mar 16 13:53:10.719: INFO: (6) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:1080/proxy/: ... 
(200; 8.406391ms) Mar 16 13:53:10.720: INFO: (6) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 9.124138ms) Mar 16 13:53:10.720: INFO: (6) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 9.377572ms) Mar 16 13:53:10.720: INFO: (6) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 9.498337ms) Mar 16 13:53:10.720: INFO: (6) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:462/proxy/: tls qux (200; 9.474334ms) Mar 16 13:53:10.720: INFO: (6) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:460/proxy/: tls baz (200; 9.716686ms) Mar 16 13:53:10.720: INFO: (6) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname1/proxy/: foo (200; 9.902302ms) Mar 16 13:53:10.720: INFO: (6) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname2/proxy/: tls qux (200; 9.9682ms) Mar 16 13:53:10.721: INFO: (6) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:443/proxy/: test<... (200; 11.965134ms) Mar 16 13:53:10.722: INFO: (6) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname1/proxy/: foo (200; 11.905697ms) Mar 16 13:53:10.723: INFO: (6) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname2/proxy/: bar (200; 12.10537ms) Mar 16 13:53:10.727: INFO: (7) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname2/proxy/: bar (200; 4.173949ms) Mar 16 13:53:10.727: INFO: (7) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname2/proxy/: bar (200; 4.247618ms) Mar 16 13:53:10.727: INFO: (7) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname1/proxy/: foo (200; 4.358878ms) Mar 16 13:53:10.727: INFO: (7) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s/proxy/: test (200; 4.248658ms) Mar 16 13:53:10.727: INFO: (7) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:1080/proxy/: ... 
(200; 4.619323ms) Mar 16 13:53:10.727: INFO: (7) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname1/proxy/: foo (200; 4.617175ms) Mar 16 13:53:10.727: INFO: (7) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname2/proxy/: tls qux (200; 4.649532ms) Mar 16 13:53:10.728: INFO: (7) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 5.304857ms) Mar 16 13:53:10.728: INFO: (7) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:443/proxy/: test<... (200; 5.515442ms) Mar 16 13:53:10.728: INFO: (7) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 5.272612ms) Mar 16 13:53:10.728: INFO: (7) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:462/proxy/: tls qux (200; 5.517277ms) Mar 16 13:53:10.728: INFO: (7) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:460/proxy/: tls baz (200; 5.519994ms) Mar 16 13:53:10.728: INFO: (7) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname1/proxy/: tls baz (200; 5.509386ms) Mar 16 13:53:10.731: INFO: (8) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:462/proxy/: tls qux (200; 2.548895ms) Mar 16 13:53:10.731: INFO: (8) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:460/proxy/: tls baz (200; 2.56157ms) Mar 16 13:53:10.732: INFO: (8) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname2/proxy/: bar (200; 3.458572ms) Mar 16 13:53:10.732: INFO: (8) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s/proxy/: test (200; 3.648534ms) Mar 16 13:53:10.732: INFO: (8) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 3.693964ms) Mar 16 13:53:10.732: INFO: (8) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname1/proxy/: foo (200; 3.859006ms) Mar 16 13:53:10.732: INFO: (8) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 
3.768572ms) Mar 16 13:53:10.732: INFO: (8) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname1/proxy/: tls baz (200; 3.860393ms) Mar 16 13:53:10.732: INFO: (8) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname2/proxy/: bar (200; 4.002931ms) Mar 16 13:53:10.732: INFO: (8) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 4.084552ms) Mar 16 13:53:10.733: INFO: (8) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:1080/proxy/: ... (200; 4.190947ms) Mar 16 13:53:10.733: INFO: (8) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:1080/proxy/: test<... (200; 4.299452ms) Mar 16 13:53:10.733: INFO: (8) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname1/proxy/: foo (200; 4.36526ms) Mar 16 13:53:10.734: INFO: (8) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 5.380783ms) Mar 16 13:53:10.734: INFO: (8) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname2/proxy/: tls qux (200; 5.39159ms) Mar 16 13:53:10.734: INFO: (8) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:443/proxy/: test (200; 2.317039ms) Mar 16 13:53:10.736: INFO: (9) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:462/proxy/: tls qux (200; 2.28217ms) Mar 16 13:53:10.737: INFO: (9) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:1080/proxy/: test<... (200; 2.939967ms) Mar 16 13:53:10.737: INFO: (9) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:443/proxy/: ... 
(200; 4.383604ms) Mar 16 13:53:10.738: INFO: (9) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname1/proxy/: foo (200; 4.496232ms) Mar 16 13:53:10.739: INFO: (9) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname2/proxy/: bar (200; 4.438282ms) Mar 16 13:53:10.739: INFO: (9) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 5.274851ms) Mar 16 13:53:10.742: INFO: (10) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 2.684538ms) Mar 16 13:53:10.742: INFO: (10) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:1080/proxy/: ... (200; 2.807455ms) Mar 16 13:53:10.742: INFO: (10) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:460/proxy/: tls baz (200; 2.867575ms) Mar 16 13:53:10.743: INFO: (10) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 3.234447ms) Mar 16 13:53:10.743: INFO: (10) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 3.207806ms) Mar 16 13:53:10.743: INFO: (10) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname1/proxy/: foo (200; 3.297287ms) Mar 16 13:53:10.743: INFO: (10) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:462/proxy/: tls qux (200; 3.465061ms) Mar 16 13:53:10.743: INFO: (10) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:443/proxy/: test (200; 3.499758ms) Mar 16 13:53:10.743: INFO: (10) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:1080/proxy/: test<... 
(200; 3.719787ms) Mar 16 13:53:10.743: INFO: (10) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 3.66648ms) Mar 16 13:53:10.743: INFO: (10) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname2/proxy/: bar (200; 3.723761ms) Mar 16 13:53:10.744: INFO: (10) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname2/proxy/: tls qux (200; 4.203375ms) Mar 16 13:53:10.744: INFO: (10) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname1/proxy/: tls baz (200; 4.213631ms) Mar 16 13:53:10.744: INFO: (10) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname1/proxy/: foo (200; 4.295236ms) Mar 16 13:53:10.744: INFO: (10) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname2/proxy/: bar (200; 4.349529ms) Mar 16 13:53:10.747: INFO: (11) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:1080/proxy/: ... (200; 3.46158ms) Mar 16 13:53:10.747: INFO: (11) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 3.483659ms) Mar 16 13:53:10.747: INFO: (11) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:1080/proxy/: test<... 
(200; 3.506513ms) Mar 16 13:53:10.747: INFO: (11) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 3.520429ms) Mar 16 13:53:10.747: INFO: (11) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 3.598352ms) Mar 16 13:53:10.747: INFO: (11) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:462/proxy/: tls qux (200; 3.586504ms) Mar 16 13:53:10.747: INFO: (11) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 3.656517ms) Mar 16 13:53:10.747: INFO: (11) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s/proxy/: test (200; 3.677163ms) Mar 16 13:53:10.748: INFO: (11) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:460/proxy/: tls baz (200; 3.642578ms) Mar 16 13:53:10.748: INFO: (11) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:443/proxy/: test<... (200; 5.203361ms) Mar 16 13:53:10.755: INFO: (12) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:1080/proxy/: ... 
(200; 5.359232ms) Mar 16 13:53:10.755: INFO: (12) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname2/proxy/: bar (200; 5.302372ms) Mar 16 13:53:10.755: INFO: (12) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 5.365927ms) Mar 16 13:53:10.755: INFO: (12) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s/proxy/: test (200; 5.396397ms) Mar 16 13:53:10.755: INFO: (12) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname2/proxy/: bar (200; 5.367222ms) Mar 16 13:53:10.755: INFO: (12) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname2/proxy/: tls qux (200; 5.493263ms) Mar 16 13:53:10.755: INFO: (12) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname1/proxy/: tls baz (200; 5.509615ms) Mar 16 13:53:10.755: INFO: (12) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 5.617216ms) Mar 16 13:53:10.755: INFO: (12) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 5.62803ms) Mar 16 13:53:10.755: INFO: (12) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:443/proxy/: test<... (200; 4.790036ms) Mar 16 13:53:10.760: INFO: (13) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 4.850565ms) Mar 16 13:53:10.760: INFO: (13) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s/proxy/: test (200; 4.853831ms) Mar 16 13:53:10.760: INFO: (13) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:1080/proxy/: ... 
(200; 4.900138ms) Mar 16 13:53:10.760: INFO: (13) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname1/proxy/: foo (200; 4.931546ms) Mar 16 13:53:10.760: INFO: (13) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:443/proxy/: test (200; 2.934294ms) Mar 16 13:53:10.763: INFO: (14) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 3.004798ms) Mar 16 13:53:10.763: INFO: (14) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:1080/proxy/: test<... (200; 3.006293ms) Mar 16 13:53:10.763: INFO: (14) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:460/proxy/: tls baz (200; 3.044587ms) Mar 16 13:53:10.765: INFO: (14) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 4.454676ms) Mar 16 13:53:10.765: INFO: (14) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:1080/proxy/: ... (200; 4.535446ms) Mar 16 13:53:10.765: INFO: (14) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 4.587845ms) Mar 16 13:53:10.765: INFO: (14) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname2/proxy/: tls qux (200; 4.58436ms) Mar 16 13:53:10.765: INFO: (14) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname2/proxy/: bar (200; 4.582711ms) Mar 16 13:53:10.765: INFO: (14) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname1/proxy/: foo (200; 4.640466ms) Mar 16 13:53:10.765: INFO: (14) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:443/proxy/: test (200; 4.376932ms) Mar 16 13:53:10.770: INFO: (15) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname1/proxy/: tls baz (200; 4.451864ms) Mar 16 13:53:10.770: INFO: (15) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 4.40143ms) Mar 16 13:53:10.770: INFO: (15) 
/api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:1080/proxy/: ... (200; 4.444249ms) Mar 16 13:53:10.770: INFO: (15) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname1/proxy/: foo (200; 4.506751ms) Mar 16 13:53:10.770: INFO: (15) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:462/proxy/: tls qux (200; 4.418529ms) Mar 16 13:53:10.770: INFO: (15) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname1/proxy/: foo (200; 4.478141ms) Mar 16 13:53:10.770: INFO: (15) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 4.545918ms) Mar 16 13:53:10.770: INFO: (15) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname2/proxy/: tls qux (200; 4.567445ms) Mar 16 13:53:10.770: INFO: (15) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:460/proxy/: tls baz (200; 4.55526ms) Mar 16 13:53:10.770: INFO: (15) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname2/proxy/: bar (200; 4.662407ms) Mar 16 13:53:10.770: INFO: (15) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:1080/proxy/: test<... (200; 4.693622ms) Mar 16 13:53:10.770: INFO: (15) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 4.724067ms) Mar 16 13:53:10.774: INFO: (16) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:460/proxy/: tls baz (200; 3.66227ms) Mar 16 13:53:10.774: INFO: (16) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 3.714275ms) Mar 16 13:53:10.774: INFO: (16) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:1080/proxy/: ... (200; 3.793218ms) Mar 16 13:53:10.774: INFO: (16) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:1080/proxy/: test<... 
(200; 3.804484ms) Mar 16 13:53:10.774: INFO: (16) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 4.139469ms) Mar 16 13:53:10.774: INFO: (16) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:443/proxy/: test (200; 6.424319ms) Mar 16 13:53:10.777: INFO: (16) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname2/proxy/: bar (200; 6.901744ms) Mar 16 13:53:10.781: INFO: (17) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:460/proxy/: tls baz (200; 4.010955ms) Mar 16 13:53:10.782: INFO: (17) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 5.306842ms) Mar 16 13:53:10.783: INFO: (17) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname1/proxy/: foo (200; 5.449934ms) Mar 16 13:53:10.783: INFO: (17) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname2/proxy/: bar (200; 5.66973ms) Mar 16 13:53:10.783: INFO: (17) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname1/proxy/: tls baz (200; 5.724327ms) Mar 16 13:53:10.783: INFO: (17) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 6.357109ms) Mar 16 13:53:10.783: INFO: (17) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname1/proxy/: foo (200; 6.356428ms) Mar 16 13:53:10.783: INFO: (17) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:462/proxy/: tls qux (200; 6.379169ms) Mar 16 13:53:10.784: INFO: (17) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:1080/proxy/: ... (200; 6.604763ms) Mar 16 13:53:10.784: INFO: (17) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:443/proxy/: test<... 
(200; 6.814382ms) Mar 16 13:53:10.784: INFO: (17) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s/proxy/: test (200; 6.875066ms) Mar 16 13:53:10.784: INFO: (17) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 6.919116ms) Mar 16 13:53:10.787: INFO: (18) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 3.158643ms) Mar 16 13:53:10.788: INFO: (18) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 3.708153ms) Mar 16 13:53:10.788: INFO: (18) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:1080/proxy/: ... (200; 3.756463ms) Mar 16 13:53:10.788: INFO: (18) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 3.77341ms) Mar 16 13:53:10.788: INFO: (18) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:1080/proxy/: test<... (200; 3.748585ms) Mar 16 13:53:10.788: INFO: (18) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname2/proxy/: bar (200; 4.061947ms) Mar 16 13:53:10.789: INFO: (18) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:462/proxy/: tls qux (200; 4.536802ms) Mar 16 13:53:10.789: INFO: (18) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:162/proxy/: bar (200; 4.608548ms) Mar 16 13:53:10.789: INFO: (18) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s/proxy/: test (200; 4.590624ms) Mar 16 13:53:10.789: INFO: (18) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:460/proxy/: tls baz (200; 4.632779ms) Mar 16 13:53:10.789: INFO: (18) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname1/proxy/: foo (200; 4.781172ms) Mar 16 13:53:10.789: INFO: (18) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:443/proxy/: test<... 
(200; 3.825635ms) Mar 16 13:53:10.797: INFO: (19) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 3.88931ms) Mar 16 13:53:10.798: INFO: (19) /api/v1/namespaces/proxy-6012/pods/proxy-service-l4g6h-ccr4s/proxy/: test (200; 3.954007ms) Mar 16 13:53:10.798: INFO: (19) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:160/proxy/: foo (200; 4.055922ms) Mar 16 13:53:10.798: INFO: (19) /api/v1/namespaces/proxy-6012/pods/https:proxy-service-l4g6h-ccr4s:460/proxy/: tls baz (200; 4.012699ms) Mar 16 13:53:10.798: INFO: (19) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname2/proxy/: bar (200; 4.00946ms) Mar 16 13:53:10.798: INFO: (19) /api/v1/namespaces/proxy-6012/pods/http:proxy-service-l4g6h-ccr4s:1080/proxy/: ... (200; 4.251615ms) Mar 16 13:53:10.798: INFO: (19) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname1/proxy/: foo (200; 4.780103ms) Mar 16 13:53:10.799: INFO: (19) /api/v1/namespaces/proxy-6012/services/http:proxy-service-l4g6h:portname1/proxy/: foo (200; 5.09269ms) Mar 16 13:53:10.799: INFO: (19) /api/v1/namespaces/proxy-6012/services/proxy-service-l4g6h:portname2/proxy/: bar (200; 5.104362ms) Mar 16 13:53:10.799: INFO: (19) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname2/proxy/: tls qux (200; 5.245019ms) Mar 16 13:53:10.799: INFO: (19) /api/v1/namespaces/proxy-6012/services/https:proxy-service-l4g6h:tlsportname1/proxy/: tls baz (200; 5.284627ms) STEP: deleting ReplicationController proxy-service-l4g6h in namespace proxy-6012, will wait for the garbage collector to delete the pods Mar 16 13:53:10.856: INFO: Deleting ReplicationController proxy-service-l4g6h took: 5.577957ms Mar 16 13:53:11.156: INFO: Terminating ReplicationController proxy-service-l4g6h pods took: 300.24992ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:53:13.157: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6012" for this suite. • [SLOW TEST:16.656 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":275,"completed":166,"skipped":2530,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:53:13.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-5814c6e1-5d18-4077-b2fb-c421db06c040 STEP: Creating a pod to test consume secrets Mar 16 13:53:13.336: INFO: Waiting up to 5m0s for pod "pod-secrets-19d175e0-33bc-4732-92f4-3249016f660a" in namespace "secrets-2147" to be "Succeeded or Failed" Mar 16 13:53:13.348: INFO: Pod "pod-secrets-19d175e0-33bc-4732-92f4-3249016f660a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.982536ms Mar 16 13:53:15.372: INFO: Pod "pod-secrets-19d175e0-33bc-4732-92f4-3249016f660a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035871115s Mar 16 13:53:17.376: INFO: Pod "pod-secrets-19d175e0-33bc-4732-92f4-3249016f660a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039991711s STEP: Saw pod success Mar 16 13:53:17.376: INFO: Pod "pod-secrets-19d175e0-33bc-4732-92f4-3249016f660a" satisfied condition "Succeeded or Failed" Mar 16 13:53:17.379: INFO: Trying to get logs from node latest-worker pod pod-secrets-19d175e0-33bc-4732-92f4-3249016f660a container secret-volume-test: STEP: delete the pod Mar 16 13:53:17.442: INFO: Waiting for pod pod-secrets-19d175e0-33bc-4732-92f4-3249016f660a to disappear Mar 16 13:53:17.445: INFO: Pod pod-secrets-19d175e0-33bc-4732-92f4-3249016f660a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:53:17.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2147" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":2549,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:53:17.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 16 13:53:17.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5022' Mar 16 13:53:20.613: INFO: stderr: "" Mar 16 13:53:20.614: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Mar 16 13:53:20.645: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5022' Mar 16 
13:53:32.741: INFO: stderr: "" Mar 16 13:53:32.741: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:53:32.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5022" for this suite. • [SLOW TEST:15.295 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":168,"skipped":2553,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:53:32.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 16 13:53:32.804: INFO: Waiting up to 5m0s for pod "pod-19ab5220-48c9-4321-b3b4-3d02577d8d05" in namespace 
"emptydir-4578" to be "Succeeded or Failed" Mar 16 13:53:32.809: INFO: Pod "pod-19ab5220-48c9-4321-b3b4-3d02577d8d05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185671ms Mar 16 13:53:34.813: INFO: Pod "pod-19ab5220-48c9-4321-b3b4-3d02577d8d05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00862127s Mar 16 13:53:36.817: INFO: Pod "pod-19ab5220-48c9-4321-b3b4-3d02577d8d05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012624478s STEP: Saw pod success Mar 16 13:53:36.817: INFO: Pod "pod-19ab5220-48c9-4321-b3b4-3d02577d8d05" satisfied condition "Succeeded or Failed" Mar 16 13:53:36.821: INFO: Trying to get logs from node latest-worker2 pod pod-19ab5220-48c9-4321-b3b4-3d02577d8d05 container test-container: STEP: delete the pod Mar 16 13:53:36.852: INFO: Waiting for pod pod-19ab5220-48c9-4321-b3b4-3d02577d8d05 to disappear Mar 16 13:53:36.864: INFO: Pod pod-19ab5220-48c9-4321-b3b4-3d02577d8d05 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:53:36.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4578" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":169,"skipped":2560,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:53:36.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:53:36.953: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 16 13:53:41.966: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 16 13:53:41.966: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 16 13:53:44.483: INFO: Creating deployment "test-rollover-deployment" Mar 16 13:53:44.562: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 16 13:53:46.569: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 16 13:53:46.574: INFO: Ensure that both replica sets have 1 created replica Mar 16 13:53:46.579: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Mar 16 13:53:46.584: INFO: Updating deployment test-rollover-deployment Mar 16 13:53:46.584: INFO: Wait deployment "test-rollover-deployment" 
to be observed by the deployment controller Mar 16 13:53:48.598: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 16 13:53:48.604: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 16 13:53:48.609: INFO: all replica sets need to contain the pod-template-hash label Mar 16 13:53:48.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963626, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:53:50.615: INFO: all replica sets need to contain the pod-template-hash label Mar 16 13:53:50.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963629, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:53:52.616: INFO: all replica sets need to contain the pod-template-hash label Mar 16 13:53:52.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963629, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:53:54.618: INFO: all replica sets need to contain the pod-template-hash label Mar 16 13:53:54.618: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963629, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:53:56.615: INFO: all replica sets need to contain the pod-template-hash label Mar 16 13:53:56.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963629, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:53:58.617: INFO: all replica sets need to contain the pod-template-hash label Mar 16 13:53:58.617: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963629, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963624, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 16 13:54:00.639: INFO: Mar 16 13:54:00.640: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 16 13:54:00.646: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5259 /apis/apps/v1/namespaces/deployment-5259/deployments/test-rollover-deployment 4a54be5c-b861-4215-97bb-e138564b938a 283766 2 2020-03-16 13:53:44 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a65158 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-16 13:53:44 +0000 UTC,LastTransitionTime:2020-03-16 13:53:44 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-03-16 13:53:59 +0000 UTC,LastTransitionTime:2020-03-16 13:53:44 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 16 13:54:00.648: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-5259 /apis/apps/v1/namespaces/deployment-5259/replicasets/test-rollover-deployment-78df7bc796 8d2e87fd-f16a-47e4-a184-05877030d557 283755 2 2020-03-16 13:53:46 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 4a54be5c-b861-4215-97bb-e138564b938a 0xc003a65637 0xc003a65638}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a656a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 16 13:54:00.648: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 16 13:54:00.649: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5259 /apis/apps/v1/namespaces/deployment-5259/replicasets/test-rollover-controller 8d10707d-43a3-4931-a4a9-4115a9a70189 283765 2 2020-03-16 13:53:36 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 4a54be5c-b861-4215-97bb-e138564b938a 0xc003a65567 0xc003a65568}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003a655c8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 16 13:54:00.649: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-5259 /apis/apps/v1/namespaces/deployment-5259/replicasets/test-rollover-deployment-f6c94f66c 58f58ed1-72c2-411c-9cc7-1370dab26046 283708 2 2020-03-16 13:53:44 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 4a54be5c-b861-4215-97bb-e138564b938a 0xc003a65710 0xc003a65711}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a65788 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 16 13:54:00.651: INFO: Pod "test-rollover-deployment-78df7bc796-bbjn9" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-bbjn9 test-rollover-deployment-78df7bc796- deployment-5259 /api/v1/namespaces/deployment-5259/pods/test-rollover-deployment-78df7bc796-bbjn9 a20997a4-91ff-4b16-b43b-bbb08b98a6f7 283722 0 2020-03-16 13:53:46 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 8d2e87fd-f16a-47e4-a184-05877030d557 0xc0028e8017 0xc0028e8018}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-v597h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-v597h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-v597h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,Read
OnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:53:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:53:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:53:49 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 13:53:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.74,StartTime:2020-03-16 13:53:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 13:53:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://c57ae7da25a667dcc235a0dccc3fd41f515c4b2b642f588333399dfa6ae2a27c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.74,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:54:00.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5259" for this suite. 
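[Editor's note] The Deployment dumped above uses a RollingUpdate strategy with MaxUnavailable:0 and MaxSurge:1, and the test watches the new ReplicaSet scale to 1 while the old controller and the failed revision scale to 0. As an illustrative sketch only (not the e2e framework's code), the rollover sequence those constraints force can be simulated like this:

```python
# Illustrative sketch of a RollingUpdate rollover with
# maxUnavailable=0, maxSurge=1 (per the Deployment spec in the log).
# "Ready" is treated as immediate; real rollouts also wait MinReadySeconds.

def rollover(desired=1, max_surge=1, max_unavailable=0):
    old, new = desired, 0          # ready replicas in old/new ReplicaSets
    steps = []
    while old > 0 or new < desired:
        # Scale the new ReplicaSet up while the surge budget allows it.
        if old + new < desired + max_surge and new < desired:
            new += 1
        # Scale the old ReplicaSet down only while availability holds.
        elif old + new > desired and old > 0 and (old + new - 1) >= desired - max_unavailable:
            old -= 1
        steps.append((old, new))
    return steps

# With desired=1: surge to (old=1, new=1), then retire the old pod -> (0, 1).
```

With maxUnavailable=0 the old pod is only removed after the replacement is counted available, which is why the log shows both ReplicaSets briefly at desired-replicas:1 / max-replicas:2.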
• [SLOW TEST:23.765 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":170,"skipped":2625,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:54:00.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 16 13:54:04.937: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:54:05.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "container-runtime-7652" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":171,"skipped":2632,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:54:05.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 16 13:54:05.218: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd92d25c-9dac-47da-ae5c-8f9145c13e67" in namespace "downward-api-5124" to be "Succeeded or Failed" Mar 16 13:54:05.230: INFO: Pod "downwardapi-volume-bd92d25c-9dac-47da-ae5c-8f9145c13e67": Phase="Pending", Reason="", readiness=false. Elapsed: 11.574988ms Mar 16 13:54:07.257: INFO: Pod "downwardapi-volume-bd92d25c-9dac-47da-ae5c-8f9145c13e67": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.039111589s Mar 16 13:54:09.263: INFO: Pod "downwardapi-volume-bd92d25c-9dac-47da-ae5c-8f9145c13e67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044832027s STEP: Saw pod success Mar 16 13:54:09.263: INFO: Pod "downwardapi-volume-bd92d25c-9dac-47da-ae5c-8f9145c13e67" satisfied condition "Succeeded or Failed" Mar 16 13:54:09.265: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-bd92d25c-9dac-47da-ae5c-8f9145c13e67 container client-container: STEP: delete the pod Mar 16 13:54:09.301: INFO: Waiting for pod downwardapi-volume-bd92d25c-9dac-47da-ae5c-8f9145c13e67 to disappear Mar 16 13:54:09.313: INFO: Pod downwardapi-volume-bd92d25c-9dac-47da-ae5c-8f9145c13e67 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:54:09.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5124" for this suite. 
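[Editor's note] A minimal sketch of the behaviour this Downward API test verifies: when a container declares no memory limit, a `resourceFieldRef` on `limits.memory` falls back to the node's allocatable memory rather than reporting nothing. The function and values below are illustrative, not the framework's code:

```python
# Sketch: what a downward API volume reports for limits.memory when the
# container's limit is unset (fallback to node allocatable). Values are
# illustrative byte counts.

def effective_memory_limit(container_limit, node_allocatable):
    """Return the value the downward API exposes for limits.memory."""
    return container_limit if container_limit is not None else node_allocatable

# A container with an explicit 128Mi limit keeps it; one with no limit
# sees the node's allocatable memory instead.
```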
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2640,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:54:09.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:54:09.389: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "custom-resource-definition-6093" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":173,"skipped":2651,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:54:09.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:54:26.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8248" for this suite. • [SLOW TEST:17.089 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":174,"skipped":2658,"failed":0} SSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:54:26.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token Mar 16 13:54:27.085: INFO: created pod pod-service-account-defaultsa Mar 16 13:54:27.085: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 16 13:54:27.093: INFO: created pod pod-service-account-mountsa Mar 16 13:54:27.093: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 16 13:54:27.115: INFO: created pod pod-service-account-nomountsa Mar 16 13:54:27.115: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 16 13:54:27.129: INFO: created pod pod-service-account-defaultsa-mountspec Mar 16 13:54:27.129: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 16 13:54:27.176: INFO: created pod pod-service-account-mountsa-mountspec Mar 16 13:54:27.176: INFO: pod pod-service-account-mountsa-mountspec service account token volume 
mount: true Mar 16 13:54:27.216: INFO: created pod pod-service-account-nomountsa-mountspec Mar 16 13:54:27.216: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 16 13:54:27.242: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 16 13:54:27.242: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 16 13:54:27.271: INFO: created pod pod-service-account-mountsa-nomountspec Mar 16 13:54:27.271: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 16 13:54:27.285: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 16 13:54:27.285: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:54:27.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1376" for this suite. 
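[Editor's note] The nine pods logged above exercise every combination of pod-level and ServiceAccount-level `automountServiceAccountToken`, and the outcomes follow one precedence rule: the pod spec's setting wins, then the service account's, and when both are unset the token is mounted. A minimal sketch of that rule (function name is illustrative):

```python
# Precedence rule behind the automount matrix in the log: pod setting
# overrides the service account's; both unset defaults to mounting.

def automounts_token(pod_setting, sa_setting):
    """True if the service account token volume gets mounted."""
    if pod_setting is not None:
        return pod_setting
    if sa_setting is not None:
        return sa_setting
    return True
```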
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":175,"skipped":2661,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:54:27.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Mar 16 13:54:27.639: INFO: Waiting up to 5m0s for pod "var-expansion-804bdfad-0da6-4e00-a48d-b7b3d4ad358c" in namespace "var-expansion-1729" to be "Succeeded or Failed" Mar 16 13:54:27.677: INFO: Pod "var-expansion-804bdfad-0da6-4e00-a48d-b7b3d4ad358c": Phase="Pending", Reason="", readiness=false. Elapsed: 37.973223ms Mar 16 13:54:29.869: INFO: Pod "var-expansion-804bdfad-0da6-4e00-a48d-b7b3d4ad358c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230651952s Mar 16 13:54:32.078: INFO: Pod "var-expansion-804bdfad-0da6-4e00-a48d-b7b3d4ad358c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.439512692s Mar 16 13:54:34.176: INFO: Pod "var-expansion-804bdfad-0da6-4e00-a48d-b7b3d4ad358c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.537465704s Mar 16 13:54:36.530: INFO: Pod "var-expansion-804bdfad-0da6-4e00-a48d-b7b3d4ad358c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.890809619s Mar 16 13:54:38.534: INFO: Pod "var-expansion-804bdfad-0da6-4e00-a48d-b7b3d4ad358c": Phase="Running", Reason="", readiness=true. Elapsed: 10.894934995s Mar 16 13:54:40.538: INFO: Pod "var-expansion-804bdfad-0da6-4e00-a48d-b7b3d4ad358c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.899475283s STEP: Saw pod success Mar 16 13:54:40.538: INFO: Pod "var-expansion-804bdfad-0da6-4e00-a48d-b7b3d4ad358c" satisfied condition "Succeeded or Failed" Mar 16 13:54:40.542: INFO: Trying to get logs from node latest-worker pod var-expansion-804bdfad-0da6-4e00-a48d-b7b3d4ad358c container dapi-container: STEP: delete the pod Mar 16 13:54:40.559: INFO: Waiting for pod var-expansion-804bdfad-0da6-4e00-a48d-b7b3d4ad358c to disappear Mar 16 13:54:40.564: INFO: Pod var-expansion-804bdfad-0da6-4e00-a48d-b7b3d4ad358c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:54:40.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1729" for this suite. 
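[Editor's note] The variable-expansion test above checks that `$(VAR_NAME)` references in a container's command are substituted from the container's environment. A rough sketch of that substitution, assuming a simple regex model (Kubernetes leaves unresolvable references verbatim; escaping via `$$` is omitted here):

```python
import re

# Sketch of $(VAR_NAME) expansion in a container command/args entry.
# References whose variable is not defined are left unchanged.

def expand_command(arg, env):
    def sub(match):
        name = match.group(1)
        return env.get(name, match.group(0))
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)", sub, arg)
```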
• [SLOW TEST:13.203 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":176,"skipped":2678,"failed":0} S ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:54:40.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 13:54:40.680: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-97388e20-0b7d-43a0-be2b-df903d152b51" in namespace "security-context-test-1314" to be "Succeeded or Failed" Mar 16 13:54:40.690: INFO: Pod "alpine-nnp-false-97388e20-0b7d-43a0-be2b-df903d152b51": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.072571ms Mar 16 13:54:42.695: INFO: Pod "alpine-nnp-false-97388e20-0b7d-43a0-be2b-df903d152b51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01459178s Mar 16 13:54:44.699: INFO: Pod "alpine-nnp-false-97388e20-0b7d-43a0-be2b-df903d152b51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018989128s Mar 16 13:54:46.703: INFO: Pod "alpine-nnp-false-97388e20-0b7d-43a0-be2b-df903d152b51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022576834s Mar 16 13:54:46.703: INFO: Pod "alpine-nnp-false-97388e20-0b7d-43a0-be2b-df903d152b51" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:54:46.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1314" for this suite. • [SLOW TEST:6.145 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":177,"skipped":2679,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] 
DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:54:46.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2182.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2182.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2182.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2182.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2182.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2182.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2182.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2182.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2182.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2182.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2182.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 156.100.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.100.156_udp@PTR;check="$$(dig +tcp +noall +answer +search 156.100.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.100.156_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2182.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2182.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2182.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2182.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2182.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2182.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2182.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2182.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2182.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2182.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2182.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 156.100.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.100.156_udp@PTR;check="$$(dig +tcp +noall +answer +search 156.100.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.100.156_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 13:54:52.974: INFO: Unable to read wheezy_udp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:54:52.977: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:54:52.980: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:54:52.983: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:54:53.003: INFO: Unable to read jessie_udp@dns-test-service.dns-2182.svc.cluster.local from pod 
dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:54:53.006: INFO: Unable to read jessie_tcp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:54:53.010: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:54:53.012: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:54:53.026: INFO: Lookups using dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e failed for: [wheezy_udp@dns-test-service.dns-2182.svc.cluster.local wheezy_tcp@dns-test-service.dns-2182.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local jessie_udp@dns-test-service.dns-2182.svc.cluster.local jessie_tcp@dns-test-service.dns-2182.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local] Mar 16 13:54:58.031: INFO: Unable to read wheezy_udp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:54:58.035: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2182.svc.cluster.local from pod 
dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:54:58.039: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:54:58.042: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:54:58.074: INFO: Unable to read jessie_udp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:54:58.077: INFO: Unable to read jessie_tcp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:54:58.080: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:54:58.084: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:54:58.101: INFO: Lookups using dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e failed for: [wheezy_udp@dns-test-service.dns-2182.svc.cluster.local 
wheezy_tcp@dns-test-service.dns-2182.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local jessie_udp@dns-test-service.dns-2182.svc.cluster.local jessie_tcp@dns-test-service.dns-2182.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local] Mar 16 13:55:03.031: INFO: Unable to read wheezy_udp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:03.035: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:03.039: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:03.042: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:03.066: INFO: Unable to read jessie_udp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:03.069: INFO: Unable to read jessie_tcp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested 
resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:03.072: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:03.075: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:03.095: INFO: Lookups using dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e failed for: [wheezy_udp@dns-test-service.dns-2182.svc.cluster.local wheezy_tcp@dns-test-service.dns-2182.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local jessie_udp@dns-test-service.dns-2182.svc.cluster.local jessie_tcp@dns-test-service.dns-2182.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local] Mar 16 13:55:08.031: INFO: Unable to read wheezy_udp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:08.036: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:08.039: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods 
dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:08.042: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:08.063: INFO: Unable to read jessie_udp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:08.068: INFO: Unable to read jessie_tcp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:08.071: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:08.074: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:08.092: INFO: Lookups using dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e failed for: [wheezy_udp@dns-test-service.dns-2182.svc.cluster.local wheezy_tcp@dns-test-service.dns-2182.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local jessie_udp@dns-test-service.dns-2182.svc.cluster.local jessie_tcp@dns-test-service.dns-2182.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local 
jessie_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local] Mar 16 13:55:13.031: INFO: Unable to read wheezy_udp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:13.034: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:13.037: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:13.040: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:13.060: INFO: Unable to read jessie_udp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:13.063: INFO: Unable to read jessie_tcp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:13.065: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:13.068: 
INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:13.086: INFO: Lookups using dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e failed for: [wheezy_udp@dns-test-service.dns-2182.svc.cluster.local wheezy_tcp@dns-test-service.dns-2182.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local jessie_udp@dns-test-service.dns-2182.svc.cluster.local jessie_tcp@dns-test-service.dns-2182.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local] Mar 16 13:55:18.031: INFO: Unable to read wheezy_udp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:18.034: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:18.038: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:18.041: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:18.064: INFO: Unable to read 
jessie_udp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:18.075: INFO: Unable to read jessie_tcp@dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:18.078: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:18.082: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local from pod dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e: the server could not find the requested resource (get pods dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e) Mar 16 13:55:18.101: INFO: Lookups using dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e failed for: [wheezy_udp@dns-test-service.dns-2182.svc.cluster.local wheezy_tcp@dns-test-service.dns-2182.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local jessie_udp@dns-test-service.dns-2182.svc.cluster.local jessie_tcp@dns-test-service.dns-2182.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2182.svc.cluster.local] Mar 16 13:55:23.105: INFO: DNS probes using dns-2182/dns-test-cb5b0630-9a10-489e-bbbc-d9390ab8911e succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:55:24.157: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2182" for this suite. • [SLOW TEST:37.473 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":178,"skipped":2777,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:55:24.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 13:55:25.977: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 13:55:28.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963726, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963726, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963726, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963725, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 13:55:31.038: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 16 13:55:35.095: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-2560 to-be-attached-pod -i -c=container1' Mar 16 13:55:35.210: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:55:35.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2560" for this suite. STEP: Destroying namespace "webhook-2560-markers" for this suite. 
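The `rc: 1` above is the webhook rejecting the `kubectl attach` call. Mechanically, the apiserver sends the registered validating webhook an AdmissionReview for the `pods/attach` subresource, and the webhook answers with `allowed: false`. A minimal sketch of building such a response (the helper name and denial message are illustrative, not the e2e suite's actual handler):

```python
# Sketch of a validating-webhook decision that denies "kubectl attach"
# (the pods/attach subresource). Helper name and message text are
# assumptions; only the AdmissionReview shape follows the v1 API.

def review_attach_request(admission_review: dict) -> dict:
    """Return an AdmissionReview response, denying pods/attach requests."""
    request = admission_review["request"]
    denied = (
        request.get("resource", {}).get("resource") == "pods"
        and request.get("subResource") == "attach"
    )
    response = {"uid": request["uid"], "allowed": not denied}
    if denied:
        # This status message surfaces in the kubectl error output.
        response["status"] = {"message": "attaching to pods is not allowed"}
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }
```

Because only the request's resource/subresource pair is inspected, the same pod can still be created and deleted normally, which is exactly what the test exercises: `create a pod` succeeds, `kubectl attach` is denied.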
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.146 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":179,"skipped":2834,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:55:35.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 16 13:55:35.466: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9479 /api/v1/namespaces/watch-9479/configmaps/e2e-watch-test-label-changed 
d23bd6ce-07ff-4f47-acb3-47115a3ac49d 284407 0 2020-03-16 13:55:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 16 13:55:35.466: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9479 /api/v1/namespaces/watch-9479/configmaps/e2e-watch-test-label-changed d23bd6ce-07ff-4f47-acb3-47115a3ac49d 284408 0 2020-03-16 13:55:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 16 13:55:35.466: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9479 /api/v1/namespaces/watch-9479/configmaps/e2e-watch-test-label-changed d23bd6ce-07ff-4f47-acb3-47115a3ac49d 284409 0 2020-03-16 13:55:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Mar 16 13:55:45.491: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9479 /api/v1/namespaces/watch-9479/configmaps/e2e-watch-test-label-changed d23bd6ce-07ff-4f47-acb3-47115a3ac49d 284464 0 2020-03-16 13:55:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 16 13:55:45.491: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9479 /api/v1/namespaces/watch-9479/configmaps/e2e-watch-test-label-changed 
d23bd6ce-07ff-4f47-acb3-47115a3ac49d 284465 0 2020-03-16 13:55:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 16 13:55:45.491: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-9479 /api/v1/namespaces/watch-9479/configmaps/e2e-watch-test-label-changed d23bd6ce-07ff-4f47-acb3-47115a3ac49d 284466 0 2020-03-16 13:55:35 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:55:45.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9479" for this suite. • [SLOW TEST:10.168 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":180,"skipped":2857,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:55:45.508: INFO: >>> kubeConfig: /root/.kube/config STEP: Building 
a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Mar 16 13:55:49.581: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2615 PodName:pod-sharedvolume-02d1ed96-11c5-45a9-87eb-4291bef83269 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 16 13:55:49.581: INFO: >>> kubeConfig: /root/.kube/config I0316 13:55:49.622821 7 log.go:172] (0xc002be29a0) (0xc0019fd9a0) Create stream I0316 13:55:49.622848 7 log.go:172] (0xc002be29a0) (0xc0019fd9a0) Stream added, broadcasting: 1 I0316 13:55:49.626214 7 log.go:172] (0xc002be29a0) Reply frame received for 1 I0316 13:55:49.626272 7 log.go:172] (0xc002be29a0) (0xc001aada40) Create stream I0316 13:55:49.626296 7 log.go:172] (0xc002be29a0) (0xc001aada40) Stream added, broadcasting: 3 I0316 13:55:49.627711 7 log.go:172] (0xc002be29a0) Reply frame received for 3 I0316 13:55:49.627745 7 log.go:172] (0xc002be29a0) (0xc0019fdae0) Create stream I0316 13:55:49.627763 7 log.go:172] (0xc002be29a0) (0xc0019fdae0) Stream added, broadcasting: 5 I0316 13:55:49.629954 7 log.go:172] (0xc002be29a0) Reply frame received for 5 I0316 13:55:49.697850 7 log.go:172] (0xc002be29a0) Data frame received for 5 I0316 13:55:49.697883 7 log.go:172] (0xc0019fdae0) (5) Data frame handling I0316 13:55:49.697902 7 log.go:172] (0xc002be29a0) Data frame received for 3 I0316 13:55:49.697911 7 log.go:172] (0xc001aada40) (3) Data frame handling I0316 13:55:49.697919 7 log.go:172] (0xc001aada40) (3) Data frame sent I0316 13:55:49.698084 7 log.go:172] (0xc002be29a0) Data frame received for 3 I0316 
13:55:49.698099 7 log.go:172] (0xc001aada40) (3) Data frame handling I0316 13:55:49.699879 7 log.go:172] (0xc002be29a0) Data frame received for 1 I0316 13:55:49.699959 7 log.go:172] (0xc0019fd9a0) (1) Data frame handling I0316 13:55:49.699998 7 log.go:172] (0xc0019fd9a0) (1) Data frame sent I0316 13:55:49.700022 7 log.go:172] (0xc002be29a0) (0xc0019fd9a0) Stream removed, broadcasting: 1 I0316 13:55:49.700047 7 log.go:172] (0xc002be29a0) Go away received I0316 13:55:49.700197 7 log.go:172] (0xc002be29a0) (0xc0019fd9a0) Stream removed, broadcasting: 1 I0316 13:55:49.700232 7 log.go:172] (0xc002be29a0) (0xc001aada40) Stream removed, broadcasting: 3 I0316 13:55:49.700273 7 log.go:172] (0xc002be29a0) (0xc0019fdae0) Stream removed, broadcasting: 5 Mar 16 13:55:49.700: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 13:55:49.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2615" for this suite. 
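The shared-volume test above works because both containers mount the same `emptyDir` volume, so a file written by one container is readable by the other over the exec stream shown in the log. A hedged sketch of an equivalent pod manifest built as a plain dict (container names and the mount path are taken from the log; the volume name, images, and pod name are assumptions):

```python
# Sketch of a pod spec with two containers sharing one emptyDir volume,
# mirroring the EmptyDir e2e test above. "shared-data", the images, and
# the default pod name are assumed; container names follow the log.

def shared_volume_pod(name: str = "pod-sharedvolume-example") -> dict:
    """Build a pod manifest where both containers see /usr/share/volumeshare."""
    mount = {"name": "shared-data", "mountPath": "/usr/share/volumeshare"}
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            # One emptyDir volume, mounted by both containers at the same path.
            "volumes": [{"name": "shared-data", "emptyDir": {}}],
            "containers": [
                {
                    "name": "busybox-main-container",
                    "image": "busybox",
                    "volumeMounts": [dict(mount)],
                },
                {
                    "name": "nginx-container",
                    "image": "nginx",
                    "volumeMounts": [dict(mount)],
                },
            ],
        },
    }
```

With this layout, `cat /usr/share/volumeshare/shareddata.txt` executed in `busybox-main-container` reads whatever the nginx container wrote there, which is the assertion the test makes via `ExecWithOptions`.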
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":181,"skipped":2879,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 13:55:49.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-1399 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a new StatefulSet Mar 16 13:55:49.850: INFO: Found 0 stateful pods, waiting for 3 Mar 16 13:55:59.895: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:55:59.895: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:55:59.895: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 16 13:55:59.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1399 ss2-1 -- /bin/sh -x -c mv 
-v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 13:56:00.202: INFO: stderr: "I0316 13:56:00.060425 1866 log.go:172] (0xc0009aa6e0) (0xc0006934a0) Create stream\nI0316 13:56:00.060499 1866 log.go:172] (0xc0009aa6e0) (0xc0006934a0) Stream added, broadcasting: 1\nI0316 13:56:00.064197 1866 log.go:172] (0xc0009aa6e0) Reply frame received for 1\nI0316 13:56:00.064245 1866 log.go:172] (0xc0009aa6e0) (0xc0008e8000) Create stream\nI0316 13:56:00.064257 1866 log.go:172] (0xc0009aa6e0) (0xc0008e8000) Stream added, broadcasting: 3\nI0316 13:56:00.065374 1866 log.go:172] (0xc0009aa6e0) Reply frame received for 3\nI0316 13:56:00.065414 1866 log.go:172] (0xc0009aa6e0) (0xc000693540) Create stream\nI0316 13:56:00.065425 1866 log.go:172] (0xc0009aa6e0) (0xc000693540) Stream added, broadcasting: 5\nI0316 13:56:00.066604 1866 log.go:172] (0xc0009aa6e0) Reply frame received for 5\nI0316 13:56:00.166593 1866 log.go:172] (0xc0009aa6e0) Data frame received for 5\nI0316 13:56:00.166646 1866 log.go:172] (0xc000693540) (5) Data frame handling\nI0316 13:56:00.166687 1866 log.go:172] (0xc000693540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 13:56:00.193820 1866 log.go:172] (0xc0009aa6e0) Data frame received for 3\nI0316 13:56:00.193867 1866 log.go:172] (0xc0008e8000) (3) Data frame handling\nI0316 13:56:00.193889 1866 log.go:172] (0xc0008e8000) (3) Data frame sent\nI0316 13:56:00.193909 1866 log.go:172] (0xc0009aa6e0) Data frame received for 3\nI0316 13:56:00.193925 1866 log.go:172] (0xc0008e8000) (3) Data frame handling\nI0316 13:56:00.194739 1866 log.go:172] (0xc0009aa6e0) Data frame received for 5\nI0316 13:56:00.194766 1866 log.go:172] (0xc000693540) (5) Data frame handling\nI0316 13:56:00.196857 1866 log.go:172] (0xc0009aa6e0) Data frame received for 1\nI0316 13:56:00.196895 1866 log.go:172] (0xc0006934a0) (1) Data frame handling\nI0316 13:56:00.196920 1866 log.go:172] (0xc0006934a0) (1) Data frame sent\nI0316 13:56:00.196952 1866 
log.go:172] (0xc0009aa6e0) (0xc0006934a0) Stream removed, broadcasting: 1\nI0316 13:56:00.197073 1866 log.go:172] (0xc0009aa6e0) Go away received\nI0316 13:56:00.197680 1866 log.go:172] (0xc0009aa6e0) (0xc0006934a0) Stream removed, broadcasting: 1\nI0316 13:56:00.197714 1866 log.go:172] (0xc0009aa6e0) (0xc0008e8000) Stream removed, broadcasting: 3\nI0316 13:56:00.197731 1866 log.go:172] (0xc0009aa6e0) (0xc000693540) Stream removed, broadcasting: 5\n" Mar 16 13:56:00.202: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 13:56:00.202: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 16 13:56:10.233: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 16 13:56:20.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1399 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 16 13:56:20.491: INFO: stderr: "I0316 13:56:20.410010 1887 log.go:172] (0xc0000ea370) (0xc0008be000) Create stream\nI0316 13:56:20.410081 1887 log.go:172] (0xc0000ea370) (0xc0008be000) Stream added, broadcasting: 1\nI0316 13:56:20.414045 1887 log.go:172] (0xc0000ea370) Reply frame received for 1\nI0316 13:56:20.414121 1887 log.go:172] (0xc0000ea370) (0xc0005f7860) Create stream\nI0316 13:56:20.414145 1887 log.go:172] (0xc0000ea370) (0xc0005f7860) Stream added, broadcasting: 3\nI0316 13:56:20.414954 1887 log.go:172] (0xc0000ea370) Reply frame received for 3\nI0316 13:56:20.414996 1887 log.go:172] (0xc0000ea370) (0xc000480c80) Create stream\nI0316 13:56:20.415017 1887 log.go:172] (0xc0000ea370) (0xc000480c80) Stream added, broadcasting: 5\nI0316 13:56:20.415759 
1887 log.go:172] (0xc0000ea370) Reply frame received for 5\nI0316 13:56:20.484247 1887 log.go:172] (0xc0000ea370) Data frame received for 3\nI0316 13:56:20.484274 1887 log.go:172] (0xc0005f7860) (3) Data frame handling\nI0316 13:56:20.484284 1887 log.go:172] (0xc0005f7860) (3) Data frame sent\nI0316 13:56:20.484315 1887 log.go:172] (0xc0000ea370) Data frame received for 5\nI0316 13:56:20.484348 1887 log.go:172] (0xc000480c80) (5) Data frame handling\nI0316 13:56:20.484383 1887 log.go:172] (0xc000480c80) (5) Data frame sent\nI0316 13:56:20.484406 1887 log.go:172] (0xc0000ea370) Data frame received for 5\nI0316 13:56:20.484423 1887 log.go:172] (0xc000480c80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0316 13:56:20.484665 1887 log.go:172] (0xc0000ea370) Data frame received for 3\nI0316 13:56:20.484699 1887 log.go:172] (0xc0005f7860) (3) Data frame handling\nI0316 13:56:20.486261 1887 log.go:172] (0xc0000ea370) Data frame received for 1\nI0316 13:56:20.486288 1887 log.go:172] (0xc0008be000) (1) Data frame handling\nI0316 13:56:20.486305 1887 log.go:172] (0xc0008be000) (1) Data frame sent\nI0316 13:56:20.486321 1887 log.go:172] (0xc0000ea370) (0xc0008be000) Stream removed, broadcasting: 1\nI0316 13:56:20.486369 1887 log.go:172] (0xc0000ea370) Go away received\nI0316 13:56:20.486882 1887 log.go:172] (0xc0000ea370) (0xc0008be000) Stream removed, broadcasting: 1\nI0316 13:56:20.486906 1887 log.go:172] (0xc0000ea370) (0xc0005f7860) Stream removed, broadcasting: 3\nI0316 13:56:20.486918 1887 log.go:172] (0xc0000ea370) (0xc000480c80) Stream removed, broadcasting: 5\n" Mar 16 13:56:20.491: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 13:56:20.491: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 16 13:56:30.538: INFO: Waiting for StatefulSet statefulset-1399/ss2 to complete update Mar 16 13:56:30.538: INFO: 
Waiting for Pod statefulset-1399/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 16 13:56:30.538: INFO: Waiting for Pod statefulset-1399/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 16 13:56:30.538: INFO: Waiting for Pod statefulset-1399/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 16 13:56:40.580: INFO: Waiting for StatefulSet statefulset-1399/ss2 to complete update Mar 16 13:56:40.580: INFO: Waiting for Pod statefulset-1399/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 16 13:56:50.546: INFO: Waiting for StatefulSet statefulset-1399/ss2 to complete update Mar 16 13:56:50.546: INFO: Waiting for Pod statefulset-1399/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Mar 16 13:57:00.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1399 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 13:57:00.918: INFO: stderr: "I0316 13:57:00.736256 1907 log.go:172] (0xc0009d8000) (0xc000aa2000) Create stream\nI0316 13:57:00.736331 1907 log.go:172] (0xc0009d8000) (0xc000aa2000) Stream added, broadcasting: 1\nI0316 13:57:00.739157 1907 log.go:172] (0xc0009d8000) Reply frame received for 1\nI0316 13:57:00.739223 1907 log.go:172] (0xc0009d8000) (0xc0005bf680) Create stream\nI0316 13:57:00.739240 1907 log.go:172] (0xc0009d8000) (0xc0005bf680) Stream added, broadcasting: 3\nI0316 13:57:00.741050 1907 log.go:172] (0xc0009d8000) Reply frame received for 3\nI0316 13:57:00.741074 1907 log.go:172] (0xc0009d8000) (0xc000aa20a0) Create stream\nI0316 13:57:00.741083 1907 log.go:172] (0xc0009d8000) (0xc000aa20a0) Stream added, broadcasting: 5\nI0316 13:57:00.742152 1907 log.go:172] (0xc0009d8000) Reply frame received for 5\nI0316 13:57:00.795020 1907 log.go:172] (0xc0009d8000) Data frame received for 
5\nI0316 13:57:00.795046 1907 log.go:172] (0xc000aa20a0) (5) Data frame handling\nI0316 13:57:00.795065 1907 log.go:172] (0xc000aa20a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 13:57:00.910765 1907 log.go:172] (0xc0009d8000) Data frame received for 3\nI0316 13:57:00.910795 1907 log.go:172] (0xc0005bf680) (3) Data frame handling\nI0316 13:57:00.910814 1907 log.go:172] (0xc0005bf680) (3) Data frame sent\nI0316 13:57:00.911221 1907 log.go:172] (0xc0009d8000) Data frame received for 5\nI0316 13:57:00.911233 1907 log.go:172] (0xc000aa20a0) (5) Data frame handling\nI0316 13:57:00.911295 1907 log.go:172] (0xc0009d8000) Data frame received for 3\nI0316 13:57:00.911335 1907 log.go:172] (0xc0005bf680) (3) Data frame handling\nI0316 13:57:00.912921 1907 log.go:172] (0xc0009d8000) Data frame received for 1\nI0316 13:57:00.912933 1907 log.go:172] (0xc000aa2000) (1) Data frame handling\nI0316 13:57:00.912940 1907 log.go:172] (0xc000aa2000) (1) Data frame sent\nI0316 13:57:00.913417 1907 log.go:172] (0xc0009d8000) (0xc000aa2000) Stream removed, broadcasting: 1\nI0316 13:57:00.913496 1907 log.go:172] (0xc0009d8000) Go away received\nI0316 13:57:00.913839 1907 log.go:172] (0xc0009d8000) (0xc000aa2000) Stream removed, broadcasting: 1\nI0316 13:57:00.913861 1907 log.go:172] (0xc0009d8000) (0xc0005bf680) Stream removed, broadcasting: 3\nI0316 13:57:00.913875 1907 log.go:172] (0xc0009d8000) (0xc000aa20a0) Stream removed, broadcasting: 5\n" Mar 16 13:57:00.918: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 13:57:00.918: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 13:57:10.947: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 16 13:57:21.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-1399 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 16 13:57:21.289: INFO: stderr: "I0316 13:57:21.188150 1929 log.go:172] (0xc000b28580) (0xc000687360) Create stream\nI0316 13:57:21.188206 1929 log.go:172] (0xc000b28580) (0xc000687360) Stream added, broadcasting: 1\nI0316 13:57:21.191254 1929 log.go:172] (0xc000b28580) Reply frame received for 1\nI0316 13:57:21.191322 1929 log.go:172] (0xc000b28580) (0xc0008f6000) Create stream\nI0316 13:57:21.191344 1929 log.go:172] (0xc000b28580) (0xc0008f6000) Stream added, broadcasting: 3\nI0316 13:57:21.192492 1929 log.go:172] (0xc000b28580) Reply frame received for 3\nI0316 13:57:21.192534 1929 log.go:172] (0xc000b28580) (0xc0008ce000) Create stream\nI0316 13:57:21.192549 1929 log.go:172] (0xc000b28580) (0xc0008ce000) Stream added, broadcasting: 5\nI0316 13:57:21.193603 1929 log.go:172] (0xc000b28580) Reply frame received for 5\nI0316 13:57:21.284168 1929 log.go:172] (0xc000b28580) Data frame received for 5\nI0316 13:57:21.284190 1929 log.go:172] (0xc0008ce000) (5) Data frame handling\nI0316 13:57:21.284197 1929 log.go:172] (0xc0008ce000) (5) Data frame sent\nI0316 13:57:21.284204 1929 log.go:172] (0xc000b28580) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0316 13:57:21.284227 1929 log.go:172] (0xc000b28580) Data frame received for 3\nI0316 13:57:21.284269 1929 log.go:172] (0xc0008f6000) (3) Data frame handling\nI0316 13:57:21.284294 1929 log.go:172] (0xc0008f6000) (3) Data frame sent\nI0316 13:57:21.284313 1929 log.go:172] (0xc000b28580) Data frame received for 3\nI0316 13:57:21.284331 1929 log.go:172] (0xc0008f6000) (3) Data frame handling\nI0316 13:57:21.284364 1929 log.go:172] (0xc0008ce000) (5) Data frame handling\nI0316 13:57:21.285810 1929 log.go:172] (0xc000b28580) Data frame received for 1\nI0316 13:57:21.285830 1929 log.go:172] (0xc000687360) (1) Data frame handling\nI0316 13:57:21.285851 1929 log.go:172] (0xc000687360) 
(1) Data frame sent\nI0316 13:57:21.285861 1929 log.go:172] (0xc000b28580) (0xc000687360) Stream removed, broadcasting: 1\nI0316 13:57:21.285908 1929 log.go:172] (0xc000b28580) Go away received\nI0316 13:57:21.286098 1929 log.go:172] (0xc000b28580) (0xc000687360) Stream removed, broadcasting: 1\nI0316 13:57:21.286108 1929 log.go:172] (0xc000b28580) (0xc0008f6000) Stream removed, broadcasting: 3\nI0316 13:57:21.286114 1929 log.go:172] (0xc000b28580) (0xc0008ce000) Stream removed, broadcasting: 5\n" Mar 16 13:57:21.289: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 13:57:21.289: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 16 13:57:31.312: INFO: Waiting for StatefulSet statefulset-1399/ss2 to complete update Mar 16 13:57:31.312: INFO: Waiting for Pod statefulset-1399/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 16 13:57:31.312: INFO: Waiting for Pod statefulset-1399/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 16 13:57:31.312: INFO: Waiting for Pod statefulset-1399/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 16 13:57:41.320: INFO: Waiting for StatefulSet statefulset-1399/ss2 to complete update Mar 16 13:57:41.320: INFO: Waiting for Pod statefulset-1399/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 16 13:57:41.320: INFO: Waiting for Pod statefulset-1399/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Mar 16 13:57:51.319: INFO: Waiting for StatefulSet statefulset-1399/ss2 to complete update Mar 16 13:57:51.319: INFO: Waiting for Pod statefulset-1399/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 16 
13:58:01.320: INFO: Deleting all statefulset in ns statefulset-1399
Mar 16 13:58:01.323: INFO: Scaling statefulset ss2 to 0
Mar 16 13:58:31.344: INFO: Waiting for statefulset status.replicas updated to 0
Mar 16 13:58:31.347: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:58:31.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1399" for this suite.
• [SLOW TEST:161.658 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":182,"skipped":2883,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:58:31.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 16 13:58:31.421: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cfb965b6-a67d-4db7-80e6-f634473cd3c8" in namespace "projected-8034" to be "Succeeded or Failed"
Mar 16 13:58:31.483: INFO: Pod "downwardapi-volume-cfb965b6-a67d-4db7-80e6-f634473cd3c8": Phase="Pending", Reason="", readiness=false. Elapsed: 61.343587ms
Mar 16 13:58:33.487: INFO: Pod "downwardapi-volume-cfb965b6-a67d-4db7-80e6-f634473cd3c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065358872s
Mar 16 13:58:35.491: INFO: Pod "downwardapi-volume-cfb965b6-a67d-4db7-80e6-f634473cd3c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070164295s
STEP: Saw pod success
Mar 16 13:58:35.492: INFO: Pod "downwardapi-volume-cfb965b6-a67d-4db7-80e6-f634473cd3c8" satisfied condition "Succeeded or Failed"
Mar 16 13:58:35.495: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-cfb965b6-a67d-4db7-80e6-f634473cd3c8 container client-container:
STEP: delete the pod
Mar 16 13:58:35.546: INFO: Waiting for pod downwardapi-volume-cfb965b6-a67d-4db7-80e6-f634473cd3c8 to disappear
Mar 16 13:58:35.557: INFO: Pod downwardapi-volume-cfb965b6-a67d-4db7-80e6-f634473cd3c8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:58:35.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8034" for this suite.
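For context, the "downward API volume plugin" exercised above exposes a container's resource fields as files inside the pod. A minimal sketch of such a pod manifest — the names, image, command, and limit value here are illustrative, not the ones generated by the suite:

```yaml
# Sketch: a projected downward API volume exposing the container's
# CPU limit as a file (illustrative names and values).
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m
```

With `divisor: 1m` the mounted file holds the limit in millicores; the e2e test asserts on output of this kind from the container's logs before the pod reaches "Succeeded".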
•
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":2902,"failed":0}
SSS
------------------------------
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:58:35.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Mar 16 13:58:42.158: INFO: Successfully updated pod "adopt-release-8c4cs"
STEP: Checking that the Job readopts the Pod
Mar 16 13:58:42.158: INFO: Waiting up to 15m0s for pod "adopt-release-8c4cs" in namespace "job-8628" to be "adopted"
Mar 16 13:58:42.165: INFO: Pod "adopt-release-8c4cs": Phase="Running", Reason="", readiness=true. Elapsed: 7.1175ms
Mar 16 13:58:44.169: INFO: Pod "adopt-release-8c4cs": Phase="Running", Reason="", readiness=true. Elapsed: 2.010866706s
Mar 16 13:58:44.169: INFO: Pod "adopt-release-8c4cs" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Mar 16 13:58:44.676: INFO: Successfully updated pod "adopt-release-8c4cs"
STEP: Checking that the Job releases the Pod
Mar 16 13:58:44.676: INFO: Waiting up to 15m0s for pod "adopt-release-8c4cs" in namespace "job-8628" to be "released"
Mar 16 13:58:44.771: INFO: Pod "adopt-release-8c4cs": Phase="Running", Reason="", readiness=true. Elapsed: 95.054379ms
Mar 16 13:58:46.775: INFO: Pod "adopt-release-8c4cs": Phase="Running", Reason="", readiness=true. Elapsed: 2.099191675s
Mar 16 13:58:46.775: INFO: Pod "adopt-release-8c4cs" satisfied condition "released"
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:58:46.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8628" for this suite.
• [SLOW TEST:11.258 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":184,"skipped":2905,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:58:46.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 16 13:58:47.494: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 16 13:58:49.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963927, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963927, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963927, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963927, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 16 13:58:52.561: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 16 13:58:52.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:58:53.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6880" for this suite.
STEP: Destroying namespace "webhook-6880-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.010 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":185,"skipped":2937,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:58:53.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:59:10.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7864" for this suite.
• [SLOW TEST:16.302 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":186,"skipped":2953,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:59:10.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 16 13:59:10.887: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 16 13:59:12.898: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963950, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963950, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963950, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719963950, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 16 13:59:15.929: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 16 13:59:15.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4249-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:59:17.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9847" for this suite.
STEP: Destroying namespace "webhook-9847-markers" for this suite.
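The "Registering the mutating webhook … via the AdmissionRegistration API" step above amounts to creating a MutatingWebhookConfiguration scoped to the test CRD. A rough sketch follows; only the API group, resource plural, and service name come from the log, while the configuration name, webhook name, path, and caBundle are placeholder assumptions:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook            # illustrative name
webhooks:
- name: mutate-custom-resource.example.com   # illustrative name
  clientConfig:
    service:
      namespace: webhook-9847                # test namespace from the log
      name: e2e-test-webhook                 # service name from the log
      path: /mutating-custom-resource        # assumed path
    caBundle: Cg==                           # placeholder; must be the webhook's CA bundle
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["e2e-test-webhook-4249-crds"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
```

With pruning enabled on the CRD, any fields the webhook patches in that are not declared in the CRD's structural schema are pruned by the API server, which is what this test verifies.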
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.052 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":187,"skipped":2961,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:59:17.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-d01cc945-243f-4301-8fb3-1ba7e0c47c6d
Mar 16 13:59:17.336: INFO: Pod name my-hostname-basic-d01cc945-243f-4301-8fb3-1ba7e0c47c6d: Found 0 pods out of 1
Mar 16 13:59:22.399: INFO: Pod name my-hostname-basic-d01cc945-243f-4301-8fb3-1ba7e0c47c6d: Found 1 pods out of 1
Mar 16 13:59:22.399: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d01cc945-243f-4301-8fb3-1ba7e0c47c6d" are running
Mar 16 13:59:22.421: INFO: Pod "my-hostname-basic-d01cc945-243f-4301-8fb3-1ba7e0c47c6d-64lsl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 13:59:17 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 13:59:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 13:59:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-16 13:59:17 +0000 UTC Reason: Message:}])
Mar 16 13:59:22.421: INFO: Trying to dial the pod
Mar 16 13:59:27.431: INFO: Controller my-hostname-basic-d01cc945-243f-4301-8fb3-1ba7e0c47c6d: Got expected result from replica 1 [my-hostname-basic-d01cc945-243f-4301-8fb3-1ba7e0c47c6d-64lsl]: "my-hostname-basic-d01cc945-243f-4301-8fb3-1ba7e0c47c6d-64lsl", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:59:27.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1233" for this suite.
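The replication controller created above ("my-hostname-basic-…") runs one replica of a public image that serves its own hostname, which the test then dials to confirm each replica answers with its pod name. A sketch of an equivalent manifest — the name is shortened here, and the image, args, and port are assumptions about the serve-hostname test image, not taken from the log:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-example    # the suite generates a UUID-suffixed name
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-example
  template:
    metadata:
      labels:
        name: my-hostname-basic-example
    spec:
      containers:
      - name: my-hostname-basic-example
        # Assumed: a public image that replies with the pod's hostname over HTTP
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.12
        args: ["serve-hostname"]
        ports:
        - containerPort: 9376
```

The test passes once every replica, when dialed, returns its own pod name ("1 of 1 required successes" in the log).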
• [SLOW TEST:10.252 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":188,"skipped":3032,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:59:27.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 13:59:57.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9726" for this suite.
• [SLOW TEST:29.586 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3039,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 13:59:57.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Mar 16 14:00:01.910: INFO: Successfully updated pod "annotationupdate370a06e1-e4ae-453e-b952-2f4f5528ce88"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:00:03.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8775" for this suite.
• [SLOW TEST:6.910 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3041,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:00:03.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 16 14:00:04.515: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 16 14:00:06.533: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719964004, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719964004, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719964004, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719964004, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 16 14:00:09.610: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created
validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:00:10.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3080" for this suite. STEP: Destroying namespace "webhook-3080-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.029 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":191,"skipped":3055,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:00:10.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account 
to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 16 14:00:19.648: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 14:00:19.654: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 14:00:21.654: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 14:00:21.657: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 14:00:23.654: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 14:00:23.659: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 14:00:25.654: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 14:00:25.658: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 14:00:27.654: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 14:00:27.658: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 14:00:29.654: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 14:00:29.659: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 14:00:31.654: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 14:00:31.658: INFO: Pod pod-with-poststart-exec-hook still exists Mar 16 14:00:33.654: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 16 14:00:33.658: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:00:33.658: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8707" for this suite. • [SLOW TEST:22.700 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3070,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:00:33.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 16 14:00:38.769: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released 
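The ReplicaSet adopt/release records above hinge on label-selector matching: a controller adopts an orphan pod whose labels satisfy its selector and releases a pod once its labels stop matching. A minimal sketch of that matching rule, simplified to equality-based selectors (the function name and data shapes are illustrative, not the e2e framework's API):

```python
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    """Equality-based selector: every selector key/value must appear in the pod's labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

# A ReplicaSet with selector {"name": "pod-adoption-release"} adopts a matching orphan pod...
selector = {"name": "pod-adoption-release"}
print(selector_matches(selector, {"name": "pod-adoption-release"}))  # True: adopted

# ...and releases the pod once its 'name' label is changed, as in the test above.
print(selector_matches(selector, {"name": "something-else"}))        # False: released
```

Real selectors also support set-based requirements (`In`, `NotIn`, `Exists`), which this sketch omits.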
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:00:38.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7905" for this suite.
• [SLOW TEST:5.192 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":193,"skipped":3113,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:00:38.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:00:54.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-756" for this suite.
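The Job test above ("tasks sometimes fail and are locally restarted") exercises `restartPolicy: OnFailure`: the kubelet reruns the failed container in place on the same node, rather than the Job controller creating a replacement pod. A toy sketch of that retry semantics, with hypothetical function names (not the e2e framework's code):

```python
def run_task_with_local_restarts(attempt_fn, max_restarts=10):
    """Re-run a failing task in place until it succeeds, loosely mimicking a Job
    pod with restartPolicy: OnFailure, where restarts are local to the node."""
    for restarts in range(max_restarts + 1):
        if attempt_fn():
            return restarts  # restarts consumed before the task finally succeeded
    raise RuntimeError("task never succeeded within the restart budget")

# A task that fails twice and then succeeds, like the sometimes-failing tasks in the test.
attempts = {"count": 0}
def sometimes_fails():
    attempts["count"] += 1
    return attempts["count"] > 2

print(run_task_with_local_restarts(sometimes_fails))  # 2 restarts before success
```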
• [SLOW TEST:16.105 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":194,"skipped":3163,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:00:54.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:01:26.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7039" for this suite.
STEP: Destroying namespace "nsdeletetest-8920" for this suite.
Mar 16 14:01:26.531: INFO: Namespace nsdeletetest-8920 was already deleted
STEP: Destroying namespace "nsdeletetest-3259" for this suite.
• [SLOW TEST:31.569 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":195,"skipped":3170,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:01:26.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Mar 16 14:01:26.593: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1079" to be "Succeeded or Failed"
Mar 16 14:01:26.597: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.264743ms
Mar 16 14:01:28.601: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007628117s
Mar 16 14:01:30.605: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012008293s
Mar 16 14:01:32.609: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015837474s
STEP: Saw pod success
Mar 16 14:01:32.609: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Mar 16 14:01:32.612: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Mar 16 14:01:32.628: INFO: Waiting for pod pod-host-path-test to disappear
Mar 16 14:01:32.632: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:01:32.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-1079" for this suite.
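The repeated "Waiting up to 5m0s for pod ... to be \"Succeeded or Failed\"" / "Elapsed: ..." records above come from a fixed-interval polling loop: check the pod phase every ~2 seconds until a terminal phase is seen or the timeout elapses. A minimal self-contained sketch of that pattern (names are illustrative; this does not call a real cluster):

```python
import itertools
import time

def wait_for_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() every `interval` seconds until it reports a terminal
    phase ('Succeeded' or 'Failed') or `timeout` elapses."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod phase={phase!r}, elapsed={elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        time.sleep(interval)

# Simulated pod that stays Pending for two polls, then succeeds (no real cluster needed).
phases = itertools.chain(["Pending", "Pending"], itertools.repeat("Succeeded"))
print(wait_for_phase(lambda: next(phases), interval=0.01))  # Succeeded
```

The real framework's `WaitForPodSuccessInNamespace` behaves similarly but reads the phase from the API server and treats `Failed` as a test failure.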
• [SLOW TEST:6.104 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3200,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:01:32.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-aaa7fc86-266b-4e73-a735-2ead4ea1db5d
STEP: Creating a pod to test consume secrets
Mar 16 14:01:32.723: INFO: Waiting up to 5m0s for pod "pod-secrets-9b3b8f42-efff-43cf-8583-ea586566bfaf" in namespace "secrets-2881" to be "Succeeded or Failed"
Mar 16 14:01:32.735: INFO: Pod "pod-secrets-9b3b8f42-efff-43cf-8583-ea586566bfaf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.472621ms
Mar 16 14:01:34.739: INFO: Pod "pod-secrets-9b3b8f42-efff-43cf-8583-ea586566bfaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015652697s
Mar 16 14:01:36.742: INFO: Pod "pod-secrets-9b3b8f42-efff-43cf-8583-ea586566bfaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019083824s
STEP: Saw pod success
Mar 16 14:01:36.742: INFO: Pod "pod-secrets-9b3b8f42-efff-43cf-8583-ea586566bfaf" satisfied condition "Succeeded or Failed"
Mar 16 14:01:36.745: INFO: Trying to get logs from node latest-worker pod pod-secrets-9b3b8f42-efff-43cf-8583-ea586566bfaf container secret-env-test:
STEP: delete the pod
Mar 16 14:01:36.776: INFO: Waiting for pod pod-secrets-9b3b8f42-efff-43cf-8583-ea586566bfaf to disappear
Mar 16 14:01:36.787: INFO: Pod pod-secrets-9b3b8f42-efff-43cf-8583-ea586566bfaf no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:01:36.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2881" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3215,"failed":0}
SSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:01:36.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:01:40.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3264" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3218,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:01:40.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Mar 16 14:01:41.004: INFO: Waiting up to 5m0s for pod "pod-a80063d3-fb29-445a-a653-0d71732d621c" in namespace "emptydir-2665" to be "Succeeded or Failed"
Mar 16 14:01:41.045: INFO: Pod "pod-a80063d3-fb29-445a-a653-0d71732d621c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.397845ms
Mar 16 14:01:43.049: INFO: Pod "pod-a80063d3-fb29-445a-a653-0d71732d621c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044623018s
Mar 16 14:01:45.053: INFO: Pod "pod-a80063d3-fb29-445a-a653-0d71732d621c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048595053s
STEP: Saw pod success
Mar 16 14:01:45.053: INFO: Pod "pod-a80063d3-fb29-445a-a653-0d71732d621c" satisfied condition "Succeeded or Failed"
Mar 16 14:01:45.056: INFO: Trying to get logs from node latest-worker pod pod-a80063d3-fb29-445a-a653-0d71732d621c container test-container:
STEP: delete the pod
Mar 16 14:01:45.114: INFO: Waiting for pod pod-a80063d3-fb29-445a-a653-0d71732d621c to disappear
Mar 16 14:01:45.116: INFO: Pod pod-a80063d3-fb29-445a-a653-0d71732d621c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:01:45.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2665" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3250,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:01:45.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-655a836f-11be-4f8a-8049-a0fecb117a5a
STEP: Creating a pod to test consume secrets
Mar 16 14:01:45.180: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7f90aa6e-9d73-4b0e-a948-3f2af9484431" in namespace "projected-7054" to be "Succeeded or Failed"
Mar 16 14:01:45.184: INFO: Pod "pod-projected-secrets-7f90aa6e-9d73-4b0e-a948-3f2af9484431": Phase="Pending", Reason="", readiness=false. Elapsed: 3.741914ms
Mar 16 14:01:47.188: INFO: Pod "pod-projected-secrets-7f90aa6e-9d73-4b0e-a948-3f2af9484431": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007499463s
Mar 16 14:01:49.192: INFO: Pod "pod-projected-secrets-7f90aa6e-9d73-4b0e-a948-3f2af9484431": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011684867s
STEP: Saw pod success
Mar 16 14:01:49.192: INFO: Pod "pod-projected-secrets-7f90aa6e-9d73-4b0e-a948-3f2af9484431" satisfied condition "Succeeded or Failed"
Mar 16 14:01:49.195: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-7f90aa6e-9d73-4b0e-a948-3f2af9484431 container projected-secret-volume-test:
STEP: delete the pod
Mar 16 14:01:49.228: INFO: Waiting for pod pod-projected-secrets-7f90aa6e-9d73-4b0e-a948-3f2af9484431 to disappear
Mar 16 14:01:49.244: INFO: Pod pod-projected-secrets-7f90aa6e-9d73-4b0e-a948-3f2af9484431 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:01:49.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7054" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3259,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:01:49.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
Mar 16 14:01:49.324: INFO: Waiting up to 5m0s for pod "var-expansion-1a8d035a-3ed9-43bf-b94d-89f1e99aa18c" in namespace "var-expansion-1771" to be "Succeeded or Failed"
Mar 16 14:01:49.328: INFO: Pod "var-expansion-1a8d035a-3ed9-43bf-b94d-89f1e99aa18c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.805557ms
Mar 16 14:01:51.331: INFO: Pod "var-expansion-1a8d035a-3ed9-43bf-b94d-89f1e99aa18c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007665866s
Mar 16 14:01:53.335: INFO: Pod "var-expansion-1a8d035a-3ed9-43bf-b94d-89f1e99aa18c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011679751s
STEP: Saw pod success
Mar 16 14:01:53.335: INFO: Pod "var-expansion-1a8d035a-3ed9-43bf-b94d-89f1e99aa18c" satisfied condition "Succeeded or Failed"
Mar 16 14:01:53.339: INFO: Trying to get logs from node latest-worker pod var-expansion-1a8d035a-3ed9-43bf-b94d-89f1e99aa18c container dapi-container:
STEP: delete the pod
Mar 16 14:01:53.371: INFO: Waiting for pod var-expansion-1a8d035a-3ed9-43bf-b94d-89f1e99aa18c to disappear
Mar 16 14:01:53.382: INFO: Pod var-expansion-1a8d035a-3ed9-43bf-b94d-89f1e99aa18c no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:01:53.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1771" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3288,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:01:53.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:01:53.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3997" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3320,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:01:53.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:02:53.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5023" for this suite.
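The Variable Expansion test above ("composing env vars into new env vars") exercises Kubernetes' `$(VAR_NAME)` syntax: an env var's `value` may reference env vars defined earlier in the same container spec. A minimal sketch of that substitution, simplified to omit the `$$` escape that the real kubelet supports (names here are illustrative):

```python
import re

def expand_env(value: str, env: dict) -> str:
    """Expand $(VAR_NAME) references against already-defined env vars;
    unresolvable references are left verbatim, as Kubernetes does."""
    return re.sub(
        r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
        lambda m: env.get(m.group(1), m.group(0)),
        value,
    )

env = {"FOO": "foo-value", "BAR": "bar-value"}
print(expand_env("$(FOO);;$(BAR)", env))    # foo-value;;bar-value
print(expand_env("$(MISSING) stays", env))  # $(MISSING) stays
```

The conformance test asserts the composed value from inside the pod; this sketch only models the string substitution step.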
• [SLOW TEST:60.132 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3343,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:02:53.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Mar 16 14:02:54.324: INFO: Pod name pod-release: Found 0 pods out of 1
Mar 16 14:02:59.328: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:03:00.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1034" for this suite.
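The two probe tests in this stretch of the log check complementary kubelet behaviors: a failing readiness probe only marks the pod NotReady (never ready, never restarted), while only a liveness probe failure would bump `restartCount`. A toy model of that distinction (the function and its parameters are illustrative, not the kubelet's code):

```python
def evaluate_probes(readiness_ok: bool, liveness_ok: bool, restart_count: int):
    """Toy model of kubelet probe handling: readiness only gates the Ready
    condition; a liveness failure triggers a container restart."""
    ready = readiness_ok
    if not liveness_ok:
        restart_count += 1  # liveness failure -> container is restarted
    return ready, restart_count

# Readiness probe that always fails: never ready, never restarted.
print(evaluate_probes(readiness_ok=False, liveness_ok=True, restart_count=0))  # (False, 0)
# Passing 'cat /tmp/health' liveness probe: restartCount stays at 0.
print(evaluate_probes(readiness_ok=True, liveness_ok=True, restart_count=0))   # (True, 0)
```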
• [SLOW TEST:6.680 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":204,"skipped":3363,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:03:00.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-a8651938-e0fd-40e8-ba49-1f74d5ae379f in namespace container-probe-2400 Mar 16 14:03:04.499: INFO: Started pod busybox-a8651938-e0fd-40e8-ba49-1f74d5ae379f in namespace container-probe-2400 STEP: checking the pod's current state and verifying that restartCount is present Mar 16 14:03:04.502: INFO: Initial restart count of pod busybox-a8651938-e0fd-40e8-ba49-1f74d5ae379f is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:07:05.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2400" for this suite. • [SLOW TEST:244.771 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3368,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:07:05.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-9751 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9751 to expose endpoints map[] Mar 16 14:07:05.237: INFO: Get endpoints failed (9.90326ms elapsed, ignoring for 5s): endpoints 
"multi-endpoint-test" not found Mar 16 14:07:06.241: INFO: successfully validated that service multi-endpoint-test in namespace services-9751 exposes endpoints map[] (1.013964553s elapsed) STEP: Creating pod pod1 in namespace services-9751 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9751 to expose endpoints map[pod1:[100]] Mar 16 14:07:09.321: INFO: successfully validated that service multi-endpoint-test in namespace services-9751 exposes endpoints map[pod1:[100]] (3.072652116s elapsed) STEP: Creating pod pod2 in namespace services-9751 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9751 to expose endpoints map[pod1:[100] pod2:[101]] Mar 16 14:07:12.376: INFO: successfully validated that service multi-endpoint-test in namespace services-9751 exposes endpoints map[pod1:[100] pod2:[101]] (3.050411141s elapsed) STEP: Deleting pod pod1 in namespace services-9751 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9751 to expose endpoints map[pod2:[101]] Mar 16 14:07:12.392: INFO: successfully validated that service multi-endpoint-test in namespace services-9751 exposes endpoints map[pod2:[101]] (11.858593ms elapsed) STEP: Deleting pod pod2 in namespace services-9751 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9751 to expose endpoints map[] Mar 16 14:07:13.458: INFO: successfully validated that service multi-endpoint-test in namespace services-9751 exposes endpoints map[] (1.062124327s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:07:13.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9751" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:8.419 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":206,"skipped":3375,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:07:13.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Mar 16 14:07:13.609: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 16 14:07:13.632: INFO: Waiting for terminating namespaces to be deleted... 
Mar 16 14:07:13.635: INFO: Logging pods the kubelet thinks are on node latest-worker before test Mar 16 14:07:13.655: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 16 14:07:13.655: INFO: Container kube-proxy ready: true, restart count 0 Mar 16 14:07:13.655: INFO: pod1 from services-9751 started at 2020-03-16 14:07:06 +0000 UTC (1 container statuses recorded) Mar 16 14:07:13.655: INFO: Container pause ready: false, restart count 0 Mar 16 14:07:13.655: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 16 14:07:13.655: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 14:07:13.655: INFO: Logging pods the kubelet thinks are on node latest-worker2 before test Mar 16 14:07:13.670: INFO: pod2 from services-9751 started at 2020-03-16 14:07:09 +0000 UTC (1 container statuses recorded) Mar 16 14:07:13.670: INFO: Container pause ready: true, restart count 0 Mar 16 14:07:13.670: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 16 14:07:13.670: INFO: Container kindnet-cni ready: true, restart count 0 Mar 16 14:07:13.670: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Mar 16 14:07:13.670: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-5e919d09-0d19-4a8e-8863-3fcf6be896f7 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-5e919d09-0d19-4a8e-8863-3fcf6be896f7 off the node latest-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-5e919d09-0d19-4a8e-8863-3fcf6be896f7 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:07:29.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6330" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:16.338 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":207,"skipped":3393,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:07:29.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9723 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9723 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9723 Mar 16 14:07:30.016: INFO: Found 0 stateful pods, waiting for 1 Mar 16 14:07:40.021: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 16 14:07:40.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9723 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 14:07:42.597: INFO: stderr: "I0316 14:07:42.475048 1949 log.go:172] (0xc0000e84d0) (0xc000548140) Create stream\nI0316 14:07:42.475093 1949 log.go:172] (0xc0000e84d0) (0xc000548140) Stream added, broadcasting: 1\nI0316 14:07:42.478292 1949 
log.go:172] (0xc0000e84d0) Reply frame received for 1\nI0316 14:07:42.478334 1949 log.go:172] (0xc0000e84d0) (0xc0008760a0) Create stream\nI0316 14:07:42.478349 1949 log.go:172] (0xc0000e84d0) (0xc0008760a0) Stream added, broadcasting: 3\nI0316 14:07:42.479482 1949 log.go:172] (0xc0000e84d0) Reply frame received for 3\nI0316 14:07:42.479532 1949 log.go:172] (0xc0000e84d0) (0xc0005481e0) Create stream\nI0316 14:07:42.479546 1949 log.go:172] (0xc0000e84d0) (0xc0005481e0) Stream added, broadcasting: 5\nI0316 14:07:42.480544 1949 log.go:172] (0xc0000e84d0) Reply frame received for 5\nI0316 14:07:42.565068 1949 log.go:172] (0xc0000e84d0) Data frame received for 5\nI0316 14:07:42.565093 1949 log.go:172] (0xc0005481e0) (5) Data frame handling\nI0316 14:07:42.565223 1949 log.go:172] (0xc0005481e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 14:07:42.591139 1949 log.go:172] (0xc0000e84d0) Data frame received for 3\nI0316 14:07:42.591177 1949 log.go:172] (0xc0008760a0) (3) Data frame handling\nI0316 14:07:42.591215 1949 log.go:172] (0xc0008760a0) (3) Data frame sent\nI0316 14:07:42.591242 1949 log.go:172] (0xc0000e84d0) Data frame received for 3\nI0316 14:07:42.591261 1949 log.go:172] (0xc0008760a0) (3) Data frame handling\nI0316 14:07:42.591353 1949 log.go:172] (0xc0000e84d0) Data frame received for 5\nI0316 14:07:42.591368 1949 log.go:172] (0xc0005481e0) (5) Data frame handling\nI0316 14:07:42.593471 1949 log.go:172] (0xc0000e84d0) Data frame received for 1\nI0316 14:07:42.593509 1949 log.go:172] (0xc000548140) (1) Data frame handling\nI0316 14:07:42.593532 1949 log.go:172] (0xc000548140) (1) Data frame sent\nI0316 14:07:42.593555 1949 log.go:172] (0xc0000e84d0) (0xc000548140) Stream removed, broadcasting: 1\nI0316 14:07:42.593581 1949 log.go:172] (0xc0000e84d0) Go away received\nI0316 14:07:42.593823 1949 log.go:172] (0xc0000e84d0) (0xc000548140) Stream removed, broadcasting: 1\nI0316 14:07:42.593834 1949 log.go:172] (0xc0000e84d0) 
(0xc0008760a0) Stream removed, broadcasting: 3\nI0316 14:07:42.593840 1949 log.go:172] (0xc0000e84d0) (0xc0005481e0) Stream removed, broadcasting: 5\n" Mar 16 14:07:42.597: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 14:07:42.597: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 14:07:42.601: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 16 14:07:52.616: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 16 14:07:52.616: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 14:07:52.632: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999365s Mar 16 14:07:53.637: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.992839617s Mar 16 14:07:54.641: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.988342372s Mar 16 14:07:55.646: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.98394341s Mar 16 14:07:56.651: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.979419832s Mar 16 14:07:57.656: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.973682138s Mar 16 14:07:58.660: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.969195633s Mar 16 14:07:59.664: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.965054047s Mar 16 14:08:00.668: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.960987544s Mar 16 14:08:01.672: INFO: Verifying statefulset ss doesn't scale past 1 for another 956.84183ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9723 Mar 16 14:08:02.676: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9723 ss-0 -- /bin/sh -x 
-c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 16 14:08:02.905: INFO: stderr: "I0316 14:08:02.805908 1983 log.go:172] (0xc0009804d0) (0xc000697680) Create stream\nI0316 14:08:02.805979 1983 log.go:172] (0xc0009804d0) (0xc000697680) Stream added, broadcasting: 1\nI0316 14:08:02.811868 1983 log.go:172] (0xc0009804d0) Reply frame received for 1\nI0316 14:08:02.811917 1983 log.go:172] (0xc0009804d0) (0xc0009ac000) Create stream\nI0316 14:08:02.811930 1983 log.go:172] (0xc0009804d0) (0xc0009ac000) Stream added, broadcasting: 3\nI0316 14:08:02.813416 1983 log.go:172] (0xc0009804d0) Reply frame received for 3\nI0316 14:08:02.813447 1983 log.go:172] (0xc0009804d0) (0xc000697720) Create stream\nI0316 14:08:02.813457 1983 log.go:172] (0xc0009804d0) (0xc000697720) Stream added, broadcasting: 5\nI0316 14:08:02.814316 1983 log.go:172] (0xc0009804d0) Reply frame received for 5\nI0316 14:08:02.898686 1983 log.go:172] (0xc0009804d0) Data frame received for 5\nI0316 14:08:02.898722 1983 log.go:172] (0xc000697720) (5) Data frame handling\nI0316 14:08:02.898733 1983 log.go:172] (0xc000697720) (5) Data frame sent\nI0316 14:08:02.898742 1983 log.go:172] (0xc0009804d0) Data frame received for 5\nI0316 14:08:02.898750 1983 log.go:172] (0xc000697720) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0316 14:08:02.898771 1983 log.go:172] (0xc0009804d0) Data frame received for 3\nI0316 14:08:02.898779 1983 log.go:172] (0xc0009ac000) (3) Data frame handling\nI0316 14:08:02.898788 1983 log.go:172] (0xc0009ac000) (3) Data frame sent\nI0316 14:08:02.898796 1983 log.go:172] (0xc0009804d0) Data frame received for 3\nI0316 14:08:02.898804 1983 log.go:172] (0xc0009ac000) (3) Data frame handling\nI0316 14:08:02.900457 1983 log.go:172] (0xc0009804d0) Data frame received for 1\nI0316 14:08:02.900482 1983 log.go:172] (0xc000697680) (1) Data frame handling\nI0316 14:08:02.900498 1983 log.go:172] (0xc000697680) (1) Data frame sent\nI0316 14:08:02.900514 1983 
log.go:172] (0xc0009804d0) (0xc000697680) Stream removed, broadcasting: 1\nI0316 14:08:02.900573 1983 log.go:172] (0xc0009804d0) Go away received\nI0316 14:08:02.900850 1983 log.go:172] (0xc0009804d0) (0xc000697680) Stream removed, broadcasting: 1\nI0316 14:08:02.900866 1983 log.go:172] (0xc0009804d0) (0xc0009ac000) Stream removed, broadcasting: 3\nI0316 14:08:02.900876 1983 log.go:172] (0xc0009804d0) (0xc000697720) Stream removed, broadcasting: 5\n" Mar 16 14:08:02.905: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 14:08:02.905: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 16 14:08:02.908: INFO: Found 1 stateful pods, waiting for 3 Mar 16 14:08:12.912: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 16 14:08:12.912: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 16 14:08:12.912: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 16 14:08:12.917: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9723 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 14:08:13.151: INFO: stderr: "I0316 14:08:13.056237 2006 log.go:172] (0xc00003a0b0) (0xc0006d5180) Create stream\nI0316 14:08:13.056296 2006 log.go:172] (0xc00003a0b0) (0xc0006d5180) Stream added, broadcasting: 1\nI0316 14:08:13.059419 2006 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0316 14:08:13.059462 2006 log.go:172] (0xc00003a0b0) (0xc0006d5360) Create stream\nI0316 14:08:13.059475 2006 log.go:172] (0xc00003a0b0) (0xc0006d5360) Stream added, broadcasting: 3\nI0316 14:08:13.060450 2006 log.go:172] 
(0xc00003a0b0) Reply frame received for 3\nI0316 14:08:13.060504 2006 log.go:172] (0xc00003a0b0) (0xc000580000) Create stream\nI0316 14:08:13.060525 2006 log.go:172] (0xc00003a0b0) (0xc000580000) Stream added, broadcasting: 5\nI0316 14:08:13.061625 2006 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0316 14:08:13.145591 2006 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0316 14:08:13.145636 2006 log.go:172] (0xc0006d5360) (3) Data frame handling\nI0316 14:08:13.145654 2006 log.go:172] (0xc0006d5360) (3) Data frame sent\nI0316 14:08:13.145684 2006 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0316 14:08:13.145697 2006 log.go:172] (0xc000580000) (5) Data frame handling\nI0316 14:08:13.145710 2006 log.go:172] (0xc000580000) (5) Data frame sent\nI0316 14:08:13.145722 2006 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0316 14:08:13.145733 2006 log.go:172] (0xc000580000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 14:08:13.145876 2006 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0316 14:08:13.145896 2006 log.go:172] (0xc0006d5360) (3) Data frame handling\nI0316 14:08:13.147786 2006 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0316 14:08:13.147810 2006 log.go:172] (0xc0006d5180) (1) Data frame handling\nI0316 14:08:13.147825 2006 log.go:172] (0xc0006d5180) (1) Data frame sent\nI0316 14:08:13.147839 2006 log.go:172] (0xc00003a0b0) (0xc0006d5180) Stream removed, broadcasting: 1\nI0316 14:08:13.147851 2006 log.go:172] (0xc00003a0b0) Go away received\nI0316 14:08:13.148174 2006 log.go:172] (0xc00003a0b0) (0xc0006d5180) Stream removed, broadcasting: 1\nI0316 14:08:13.148192 2006 log.go:172] (0xc00003a0b0) (0xc0006d5360) Stream removed, broadcasting: 3\nI0316 14:08:13.148204 2006 log.go:172] (0xc00003a0b0) (0xc000580000) Stream removed, broadcasting: 5\n" Mar 16 14:08:13.151: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 14:08:13.151: INFO: 
stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 14:08:13.151: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9723 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 14:08:13.380: INFO: stderr: "I0316 14:08:13.271150 2027 log.go:172] (0xc000558a50) (0xc0007bb4a0) Create stream\nI0316 14:08:13.271205 2027 log.go:172] (0xc000558a50) (0xc0007bb4a0) Stream added, broadcasting: 1\nI0316 14:08:13.273677 2027 log.go:172] (0xc000558a50) Reply frame received for 1\nI0316 14:08:13.273726 2027 log.go:172] (0xc000558a50) (0xc00041c000) Create stream\nI0316 14:08:13.273741 2027 log.go:172] (0xc000558a50) (0xc00041c000) Stream added, broadcasting: 3\nI0316 14:08:13.274595 2027 log.go:172] (0xc000558a50) Reply frame received for 3\nI0316 14:08:13.274637 2027 log.go:172] (0xc000558a50) (0xc00063a000) Create stream\nI0316 14:08:13.274652 2027 log.go:172] (0xc000558a50) (0xc00063a000) Stream added, broadcasting: 5\nI0316 14:08:13.275398 2027 log.go:172] (0xc000558a50) Reply frame received for 5\nI0316 14:08:13.327720 2027 log.go:172] (0xc000558a50) Data frame received for 5\nI0316 14:08:13.327750 2027 log.go:172] (0xc00063a000) (5) Data frame handling\nI0316 14:08:13.327772 2027 log.go:172] (0xc00063a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 14:08:13.374178 2027 log.go:172] (0xc000558a50) Data frame received for 5\nI0316 14:08:13.374252 2027 log.go:172] (0xc00063a000) (5) Data frame handling\nI0316 14:08:13.374310 2027 log.go:172] (0xc000558a50) Data frame received for 3\nI0316 14:08:13.374343 2027 log.go:172] (0xc00041c000) (3) Data frame handling\nI0316 14:08:13.374367 2027 log.go:172] (0xc00041c000) (3) Data frame sent\nI0316 14:08:13.374391 2027 log.go:172] (0xc000558a50) Data frame received for 3\nI0316 14:08:13.374405 
2027 log.go:172] (0xc00041c000) (3) Data frame handling\nI0316 14:08:13.376254 2027 log.go:172] (0xc000558a50) Data frame received for 1\nI0316 14:08:13.376289 2027 log.go:172] (0xc0007bb4a0) (1) Data frame handling\nI0316 14:08:13.376329 2027 log.go:172] (0xc0007bb4a0) (1) Data frame sent\nI0316 14:08:13.376363 2027 log.go:172] (0xc000558a50) (0xc0007bb4a0) Stream removed, broadcasting: 1\nI0316 14:08:13.376404 2027 log.go:172] (0xc000558a50) Go away received\nI0316 14:08:13.376921 2027 log.go:172] (0xc000558a50) (0xc0007bb4a0) Stream removed, broadcasting: 1\nI0316 14:08:13.376946 2027 log.go:172] (0xc000558a50) (0xc00041c000) Stream removed, broadcasting: 3\nI0316 14:08:13.376959 2027 log.go:172] (0xc000558a50) (0xc00063a000) Stream removed, broadcasting: 5\n" Mar 16 14:08:13.381: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 14:08:13.381: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 14:08:13.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9723 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 16 14:08:13.631: INFO: stderr: "I0316 14:08:13.515125 2047 log.go:172] (0xc00081c9a0) (0xc000689540) Create stream\nI0316 14:08:13.515184 2047 log.go:172] (0xc00081c9a0) (0xc000689540) Stream added, broadcasting: 1\nI0316 14:08:13.519799 2047 log.go:172] (0xc00081c9a0) Reply frame received for 1\nI0316 14:08:13.519848 2047 log.go:172] (0xc00081c9a0) (0xc0008e2000) Create stream\nI0316 14:08:13.519863 2047 log.go:172] (0xc00081c9a0) (0xc0008e2000) Stream added, broadcasting: 3\nI0316 14:08:13.521916 2047 log.go:172] (0xc00081c9a0) Reply frame received for 3\nI0316 14:08:13.521987 2047 log.go:172] (0xc00081c9a0) (0xc000352000) Create stream\nI0316 14:08:13.522009 2047 log.go:172] (0xc00081c9a0) (0xc000352000) 
Stream added, broadcasting: 5\nI0316 14:08:13.523004 2047 log.go:172] (0xc00081c9a0) Reply frame received for 5\nI0316 14:08:13.586020 2047 log.go:172] (0xc00081c9a0) Data frame received for 5\nI0316 14:08:13.586056 2047 log.go:172] (0xc000352000) (5) Data frame handling\nI0316 14:08:13.586070 2047 log.go:172] (0xc000352000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0316 14:08:13.625964 2047 log.go:172] (0xc00081c9a0) Data frame received for 3\nI0316 14:08:13.625992 2047 log.go:172] (0xc0008e2000) (3) Data frame handling\nI0316 14:08:13.626025 2047 log.go:172] (0xc0008e2000) (3) Data frame sent\nI0316 14:08:13.626451 2047 log.go:172] (0xc00081c9a0) Data frame received for 5\nI0316 14:08:13.626488 2047 log.go:172] (0xc000352000) (5) Data frame handling\nI0316 14:08:13.626514 2047 log.go:172] (0xc00081c9a0) Data frame received for 3\nI0316 14:08:13.626525 2047 log.go:172] (0xc0008e2000) (3) Data frame handling\nI0316 14:08:13.627914 2047 log.go:172] (0xc00081c9a0) Data frame received for 1\nI0316 14:08:13.627940 2047 log.go:172] (0xc000689540) (1) Data frame handling\nI0316 14:08:13.627961 2047 log.go:172] (0xc000689540) (1) Data frame sent\nI0316 14:08:13.627983 2047 log.go:172] (0xc00081c9a0) (0xc000689540) Stream removed, broadcasting: 1\nI0316 14:08:13.628012 2047 log.go:172] (0xc00081c9a0) Go away received\nI0316 14:08:13.628295 2047 log.go:172] (0xc00081c9a0) (0xc000689540) Stream removed, broadcasting: 1\nI0316 14:08:13.628308 2047 log.go:172] (0xc00081c9a0) (0xc0008e2000) Stream removed, broadcasting: 3\nI0316 14:08:13.628314 2047 log.go:172] (0xc00081c9a0) (0xc000352000) Stream removed, broadcasting: 5\n" Mar 16 14:08:13.631: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 16 14:08:13.631: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 16 14:08:13.631: INFO: Waiting for statefulset 
status.replicas updated to 0 Mar 16 14:08:13.634: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Mar 16 14:08:23.642: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 16 14:08:23.642: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 16 14:08:23.642: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 16 14:08:23.653: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999955s Mar 16 14:08:24.658: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996255291s Mar 16 14:08:25.663: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991139566s Mar 16 14:08:26.667: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.986162282s Mar 16 14:08:27.672: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.982116233s Mar 16 14:08:28.677: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.97669568s Mar 16 14:08:29.682: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.972030224s Mar 16 14:08:30.687: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.967031561s Mar 16 14:08:31.691: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.962040475s Mar 16 14:08:32.696: INFO: Verifying statefulset ss doesn't scale past 3 for another 957.429887ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-9723 Mar 16 14:08:33.702: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9723 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 16 14:08:33.917: INFO: stderr: "I0316 14:08:33.827738 2068 log.go:172] (0xc00003afd0) (0xc0006b94a0) Create stream\nI0316 14:08:33.827787 2068 log.go:172] (0xc00003afd0) (0xc0006b94a0) Stream 
added, broadcasting: 1\nI0316 14:08:33.829931 2068 log.go:172] (0xc00003afd0) Reply frame received for 1\nI0316 14:08:33.829969 2068 log.go:172] (0xc00003afd0) (0xc000ac6000) Create stream\nI0316 14:08:33.829987 2068 log.go:172] (0xc00003afd0) (0xc000ac6000) Stream added, broadcasting: 3\nI0316 14:08:33.830784 2068 log.go:172] (0xc00003afd0) Reply frame received for 3\nI0316 14:08:33.830822 2068 log.go:172] (0xc00003afd0) (0xc000a62000) Create stream\nI0316 14:08:33.830830 2068 log.go:172] (0xc00003afd0) (0xc000a62000) Stream added, broadcasting: 5\nI0316 14:08:33.831572 2068 log.go:172] (0xc00003afd0) Reply frame received for 5\nI0316 14:08:33.910443 2068 log.go:172] (0xc00003afd0) Data frame received for 3\nI0316 14:08:33.910485 2068 log.go:172] (0xc000ac6000) (3) Data frame handling\nI0316 14:08:33.910512 2068 log.go:172] (0xc00003afd0) Data frame received for 5\nI0316 14:08:33.910545 2068 log.go:172] (0xc000a62000) (5) Data frame handling\nI0316 14:08:33.910568 2068 log.go:172] (0xc000a62000) (5) Data frame sent\nI0316 14:08:33.910585 2068 log.go:172] (0xc00003afd0) Data frame received for 5\nI0316 14:08:33.910601 2068 log.go:172] (0xc000a62000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0316 14:08:33.910622 2068 log.go:172] (0xc000ac6000) (3) Data frame sent\nI0316 14:08:33.910639 2068 log.go:172] (0xc00003afd0) Data frame received for 3\nI0316 14:08:33.910656 2068 log.go:172] (0xc000ac6000) (3) Data frame handling\nI0316 14:08:33.912300 2068 log.go:172] (0xc00003afd0) Data frame received for 1\nI0316 14:08:33.912325 2068 log.go:172] (0xc0006b94a0) (1) Data frame handling\nI0316 14:08:33.912360 2068 log.go:172] (0xc0006b94a0) (1) Data frame sent\nI0316 14:08:33.912380 2068 log.go:172] (0xc00003afd0) (0xc0006b94a0) Stream removed, broadcasting: 1\nI0316 14:08:33.912394 2068 log.go:172] (0xc00003afd0) Go away received\nI0316 14:08:33.912783 2068 log.go:172] (0xc00003afd0) (0xc0006b94a0) Stream removed, broadcasting: 1\nI0316 
14:08:33.912804 2068 log.go:172] (0xc00003afd0) (0xc000ac6000) Stream removed, broadcasting: 3\nI0316 14:08:33.912815 2068 log.go:172] (0xc00003afd0) (0xc000a62000) Stream removed, broadcasting: 5\n" Mar 16 14:08:33.917: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 14:08:33.917: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 16 14:08:33.917: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9723 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 16 14:08:34.114: INFO: stderr: "I0316 14:08:34.036786 2091 log.go:172] (0xc0000e9080) (0xc000637680) Create stream\nI0316 14:08:34.036840 2091 log.go:172] (0xc0000e9080) (0xc000637680) Stream added, broadcasting: 1\nI0316 14:08:34.039400 2091 log.go:172] (0xc0000e9080) Reply frame received for 1\nI0316 14:08:34.039436 2091 log.go:172] (0xc0000e9080) (0xc00053b680) Create stream\nI0316 14:08:34.039445 2091 log.go:172] (0xc0000e9080) (0xc00053b680) Stream added, broadcasting: 3\nI0316 14:08:34.040507 2091 log.go:172] (0xc0000e9080) Reply frame received for 3\nI0316 14:08:34.040548 2091 log.go:172] (0xc0000e9080) (0xc000637720) Create stream\nI0316 14:08:34.040562 2091 log.go:172] (0xc0000e9080) (0xc000637720) Stream added, broadcasting: 5\nI0316 14:08:34.041590 2091 log.go:172] (0xc0000e9080) Reply frame received for 5\nI0316 14:08:34.107505 2091 log.go:172] (0xc0000e9080) Data frame received for 3\nI0316 14:08:34.107536 2091 log.go:172] (0xc00053b680) (3) Data frame handling\nI0316 14:08:34.107561 2091 log.go:172] (0xc00053b680) (3) Data frame sent\nI0316 14:08:34.107573 2091 log.go:172] (0xc0000e9080) Data frame received for 3\nI0316 14:08:34.107585 2091 log.go:172] (0xc00053b680) (3) Data frame handling\nI0316 14:08:34.107651 2091 log.go:172] (0xc0000e9080) Data frame 
received for 5\nI0316 14:08:34.107686 2091 log.go:172] (0xc000637720) (5) Data frame handling\nI0316 14:08:34.107716 2091 log.go:172] (0xc000637720) (5) Data frame sent\nI0316 14:08:34.107740 2091 log.go:172] (0xc0000e9080) Data frame received for 5\nI0316 14:08:34.107763 2091 log.go:172] (0xc000637720) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0316 14:08:34.109690 2091 log.go:172] (0xc0000e9080) Data frame received for 1\nI0316 14:08:34.109718 2091 log.go:172] (0xc000637680) (1) Data frame handling\nI0316 14:08:34.109733 2091 log.go:172] (0xc000637680) (1) Data frame sent\nI0316 14:08:34.109770 2091 log.go:172] (0xc0000e9080) (0xc000637680) Stream removed, broadcasting: 1\nI0316 14:08:34.109799 2091 log.go:172] (0xc0000e9080) Go away received\nI0316 14:08:34.110242 2091 log.go:172] (0xc0000e9080) (0xc000637680) Stream removed, broadcasting: 1\nI0316 14:08:34.110268 2091 log.go:172] (0xc0000e9080) (0xc00053b680) Stream removed, broadcasting: 3\nI0316 14:08:34.110283 2091 log.go:172] (0xc0000e9080) (0xc000637720) Stream removed, broadcasting: 5\n" Mar 16 14:08:34.114: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 14:08:34.114: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 16 14:08:34.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9723 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 16 14:08:34.323: INFO: stderr: "I0316 14:08:34.257627 2112 log.go:172] (0xc0009b4630) (0xc000604000) Create stream\nI0316 14:08:34.257673 2112 log.go:172] (0xc0009b4630) (0xc000604000) Stream added, broadcasting: 1\nI0316 14:08:34.259940 2112 log.go:172] (0xc0009b4630) Reply frame received for 1\nI0316 14:08:34.259989 2112 log.go:172] (0xc0009b4630) (0xc0007db360) Create stream\nI0316 
14:08:34.260006 2112 log.go:172] (0xc0009b4630) (0xc0007db360) Stream added, broadcasting: 3\nI0316 14:08:34.261062 2112 log.go:172] (0xc0009b4630) Reply frame received for 3\nI0316 14:08:34.261098 2112 log.go:172] (0xc0009b4630) (0xc000604140) Create stream\nI0316 14:08:34.261256 2112 log.go:172] (0xc0009b4630) (0xc000604140) Stream added, broadcasting: 5\nI0316 14:08:34.262078 2112 log.go:172] (0xc0009b4630) Reply frame received for 5\nI0316 14:08:34.316580 2112 log.go:172] (0xc0009b4630) Data frame received for 5\nI0316 14:08:34.316608 2112 log.go:172] (0xc000604140) (5) Data frame handling\nI0316 14:08:34.316628 2112 log.go:172] (0xc000604140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0316 14:08:34.317951 2112 log.go:172] (0xc0009b4630) Data frame received for 3\nI0316 14:08:34.317978 2112 log.go:172] (0xc0007db360) (3) Data frame handling\nI0316 14:08:34.317994 2112 log.go:172] (0xc0007db360) (3) Data frame sent\nI0316 14:08:34.318002 2112 log.go:172] (0xc0009b4630) Data frame received for 3\nI0316 14:08:34.318009 2112 log.go:172] (0xc0007db360) (3) Data frame handling\nI0316 14:08:34.318100 2112 log.go:172] (0xc0009b4630) Data frame received for 5\nI0316 14:08:34.318114 2112 log.go:172] (0xc000604140) (5) Data frame handling\nI0316 14:08:34.319468 2112 log.go:172] (0xc0009b4630) Data frame received for 1\nI0316 14:08:34.319513 2112 log.go:172] (0xc000604000) (1) Data frame handling\nI0316 14:08:34.319552 2112 log.go:172] (0xc000604000) (1) Data frame sent\nI0316 14:08:34.319568 2112 log.go:172] (0xc0009b4630) (0xc000604000) Stream removed, broadcasting: 1\nI0316 14:08:34.319593 2112 log.go:172] (0xc0009b4630) Go away received\nI0316 14:08:34.319872 2112 log.go:172] (0xc0009b4630) (0xc000604000) Stream removed, broadcasting: 1\nI0316 14:08:34.319885 2112 log.go:172] (0xc0009b4630) (0xc0007db360) Stream removed, broadcasting: 3\nI0316 14:08:34.319891 2112 log.go:172] (0xc0009b4630) (0xc000604140) Stream removed, broadcasting: 
5\n" Mar 16 14:08:34.323: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 16 14:08:34.323: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 16 14:08:34.323: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 16 14:08:54.362: INFO: Deleting all statefulset in ns statefulset-9723 Mar 16 14:08:54.365: INFO: Scaling statefulset ss to 0 Mar 16 14:08:54.375: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 14:08:54.377: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:08:54.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9723" for this suite. 
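The hold-at-3-replicas behavior and the reverse-order scale-down verified above come from the StatefulSet's default `OrderedReady` pod management policy: the controller refuses to make progress while any stateful pod is unhealthy, and removes pods in descending ordinal order. A minimal sketch of a StatefulSet like the `ss` used in this suite (the image, labels, and readiness probe are assumptions, not taken from this run):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss                            # name matches the set used in this suite
spec:
  serviceName: test
  replicas: 3
  podManagementPolicy: OrderedReady   # default: scale up 0->1->2, scale down 2->1->0
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: httpd:2.4              # assumed; the log shows /usr/local/apache2/htdocs
        readinessProbe:               # an unhealthy (not Ready) pod halts further scaling
          httpGet:
            path: /index.html
            port: 80
```

The `mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true` exec commands in the log restore the file the readiness probe serves; once each pod reports Ready again, the scale-down to 0 can proceed in order ss-2, ss-1, ss-0.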
• [SLOW TEST:84.496 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":208,"skipped":3410,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:08:54.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 16 14:08:54.456: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fa42131c-72d9-49a3-8db2-71732498ca5a" in namespace "downward-api-9385" to be "Succeeded or Failed" Mar 16 14:08:54.521: INFO: Pod 
"downwardapi-volume-fa42131c-72d9-49a3-8db2-71732498ca5a": Phase="Pending", Reason="", readiness=false. Elapsed: 65.628664ms Mar 16 14:08:56.526: INFO: Pod "downwardapi-volume-fa42131c-72d9-49a3-8db2-71732498ca5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07003229s Mar 16 14:08:58.529: INFO: Pod "downwardapi-volume-fa42131c-72d9-49a3-8db2-71732498ca5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073882346s STEP: Saw pod success Mar 16 14:08:58.530: INFO: Pod "downwardapi-volume-fa42131c-72d9-49a3-8db2-71732498ca5a" satisfied condition "Succeeded or Failed" Mar 16 14:08:58.533: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-fa42131c-72d9-49a3-8db2-71732498ca5a container client-container: STEP: delete the pod Mar 16 14:08:58.563: INFO: Waiting for pod downwardapi-volume-fa42131c-72d9-49a3-8db2-71732498ca5a to disappear Mar 16 14:08:58.567: INFO: Pod downwardapi-volume-fa42131c-72d9-49a3-8db2-71732498ca5a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:08:58.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9385" for this suite. 
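The pod under test exposes its own CPU limit to the container through a downwardAPI volume; the test then reads the container's output and checks the rendered value. A sketch of such a pod, assuming an illustrative name, image, mount path, and divisor:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29               # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m                 # render the limit in millicores
```

The pod runs to completion ("Succeeded or Failed" above) because the container simply prints the projected file and exits.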
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":209,"skipped":3417,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:08:58.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 14:08:58.624: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 16 14:09:01.542: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4981 create -f -' Mar 16 14:09:05.984: INFO: stderr: "" Mar 16 14:09:05.984: INFO: stdout: "e2e-test-crd-publish-openapi-9765-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 16 14:09:05.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4981 delete e2e-test-crd-publish-openapi-9765-crds test-foo' Mar 16 14:09:06.089: INFO: stderr: "" Mar 16 14:09:06.089: INFO: stdout: 
"e2e-test-crd-publish-openapi-9765-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 16 14:09:06.089: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4981 apply -f -' Mar 16 14:09:06.331: INFO: stderr: "" Mar 16 14:09:06.331: INFO: stdout: "e2e-test-crd-publish-openapi-9765-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 16 14:09:06.331: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4981 delete e2e-test-crd-publish-openapi-9765-crds test-foo' Mar 16 14:09:06.490: INFO: stderr: "" Mar 16 14:09:06.490: INFO: stdout: "e2e-test-crd-publish-openapi-9765-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 16 14:09:06.490: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4981 create -f -' Mar 16 14:09:06.781: INFO: rc: 1 Mar 16 14:09:06.782: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4981 apply -f -' Mar 16 14:09:07.021: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 16 14:09:07.021: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4981 create -f -' Mar 16 14:09:07.236: INFO: rc: 1 Mar 16 14:09:07.236: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4981 apply -f -' Mar 16 14:09:07.470: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 16 14:09:07.470: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9765-crds' Mar 16 14:09:07.724: INFO: stderr: "" Mar 16 14:09:07.724: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9765-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 16 14:09:07.724: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9765-crds.metadata' Mar 16 14:09:07.952: INFO: stderr: "" Mar 16 14:09:07.952: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9765-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. 
Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. 
Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. 
Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. 
DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 16 14:09:07.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9765-crds.spec' Mar 16 14:09:08.181: INFO: stderr: "" Mar 16 14:09:08.181: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9765-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 16 14:09:08.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9765-crds.spec.bars' Mar 16 14:09:08.431: INFO: stderr: "" Mar 16 14:09:08.431: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9765-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 16 14:09:08.431: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9765-crds.spec.bars2' Mar 16 14:09:08.657: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:09:11.555: INFO: Waiting up 
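The client-side create/apply validation and the `kubectl explain` output above are driven by the CRD's structural OpenAPI v3 schema, which the apiserver publishes. A sketch reconstructed from the field names and descriptions shown in the explain output (the exact schema used by the suite may differ):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crd-publish-openapi-9765-crds.crd-publish-openapi-test-foo.example.com
spec:
  group: crd-publish-openapi-test-foo.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crd-publish-openapi-9765-crds
    singular: e2e-test-crd-publish-openapi-9765-crd
    kind: E2e-test-crd-publish-openapi-9765-crd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        description: Foo CRD for Testing
        type: object
        properties:
          spec:
            description: Specification of Foo
            type: object
            properties:
              bars:
                description: List of Bars and their specs.
                type: array
                items:
                  type: object
                  required: ["name"]       # explains the "rejects request without required properties" step
                  properties:
                    name:
                      description: Name of Bar.
                      type: string
                    age:
                      description: Age of Bar.
                      type: string
                    bazs:
                      description: List of Bazs.
                      type: array
                      items:
                        type: string
```

With a structural schema and pruning, unknown properties are rejected client-side (the `rc: 1` results above), and `kubectl explain` can walk the schema recursively down to `spec.bars`.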
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4981" for this suite. • [SLOW TEST:12.991 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":210,"skipped":3517,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:09:11.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 16 14:09:11.631: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b872b90f-c8af-426d-8705-a2e0dcca8d5c" in namespace "projected-1025" to be "Succeeded or Failed" Mar 16 14:09:11.671: INFO: Pod "downwardapi-volume-b872b90f-c8af-426d-8705-a2e0dcca8d5c": Phase="Pending", Reason="", 
readiness=false. Elapsed: 40.259901ms Mar 16 14:09:13.689: INFO: Pod "downwardapi-volume-b872b90f-c8af-426d-8705-a2e0dcca8d5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058461682s Mar 16 14:09:15.695: INFO: Pod "downwardapi-volume-b872b90f-c8af-426d-8705-a2e0dcca8d5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064281697s STEP: Saw pod success Mar 16 14:09:15.695: INFO: Pod "downwardapi-volume-b872b90f-c8af-426d-8705-a2e0dcca8d5c" satisfied condition "Succeeded or Failed" Mar 16 14:09:15.698: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-b872b90f-c8af-426d-8705-a2e0dcca8d5c container client-container: STEP: delete the pod Mar 16 14:09:15.871: INFO: Waiting for pod downwardapi-volume-b872b90f-c8af-426d-8705-a2e0dcca8d5c to disappear Mar 16 14:09:15.933: INFO: Pod downwardapi-volume-b872b90f-c8af-426d-8705-a2e0dcca8d5c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:09:15.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1025" for this suite. 
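This variant serves the same downward API data through a `projected` volume source rather than a plain downwardAPI volume; projected volumes can combine downwardAPI items with secrets and configMaps under one mount. A sketch, with the name, image, path, and request value as assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29                 # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
```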
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3534,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:09:15.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 16 14:09:16.160: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Mar 16 14:09:16.166: INFO: Number of nodes with available pods: 0
Mar 16 14:09:16.166: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Mar 16 14:09:16.207: INFO: Number of nodes with available pods: 0
Mar 16 14:09:16.207: INFO: Node latest-worker2 is running more than one daemon pod
Mar 16 14:09:17.211: INFO: Number of nodes with available pods: 0
Mar 16 14:09:17.211: INFO: Node latest-worker2 is running more than one daemon pod
Mar 16 14:09:18.211: INFO: Number of nodes with available pods: 0
Mar 16 14:09:18.211: INFO: Node latest-worker2 is running more than one daemon pod
Mar 16 14:09:19.212: INFO: Number of nodes with available pods: 1
Mar 16 14:09:19.212: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Mar 16 14:09:19.240: INFO: Number of nodes with available pods: 1
Mar 16 14:09:19.240: INFO: Number of running nodes: 0, number of available pods: 1
Mar 16 14:09:20.245: INFO: Number of nodes with available pods: 0
Mar 16 14:09:20.245: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Mar 16 14:09:20.273: INFO: Number of nodes with available pods: 0
Mar 16 14:09:20.273: INFO: Node latest-worker2 is running more than one daemon pod
Mar 16 14:09:21.899: INFO: Number of nodes with available pods: 0
Mar 16 14:09:21.899: INFO: Node latest-worker2 is running more than one daemon pod
Mar 16 14:09:22.276: INFO: Number of nodes with available pods: 0
Mar 16 14:09:22.276: INFO: Node latest-worker2 is running more than one daemon pod
Mar 16 14:09:23.278: INFO: Number of nodes with available pods: 0
Mar 16 14:09:23.278: INFO: Node latest-worker2 is running more than one daemon pod
Mar 16 14:09:24.277: INFO: Number of nodes with available pods: 0
Mar 16 14:09:24.277: INFO: Node latest-worker2 is running more than one daemon pod
Mar 16 14:09:25.277: INFO: Number of nodes with available pods: 0
Mar 16 14:09:25.277: INFO: Node latest-worker2 is running more than one daemon pod
Mar 16 14:09:26.277: INFO: Number of nodes with available pods: 0
Mar 16 14:09:26.277: INFO: Node latest-worker2 is running more than one daemon pod
Mar 16 14:09:27.277: INFO: Number of nodes with available pods: 1
Mar 16 14:09:27.278: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8119, will wait for the garbage collector to delete the pods
Mar 16 14:09:27.342: INFO: Deleting DaemonSet.extensions daemon-set took: 5.873812ms
Mar 16 14:09:27.642: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.277215ms
Mar 16 14:09:33.045: INFO: Number of nodes with available pods: 0
Mar 16 14:09:33.045: INFO: Number of running nodes: 0, number of available pods: 0
Mar 16 14:09:33.048: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8119/daemonsets","resourceVersion":"288741"},"items":null}
Mar 16 14:09:33.050: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8119/pods","resourceVersion":"288741"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:09:33.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8119" for this suite.
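The "complex daemon" flow above gates a DaemonSet on a node label and then patches its update strategy. A minimal sketch of a manifest with that shape (label key/value, image, and container name are illustrative, not taken from the test source):

```yaml
# Hypothetical sketch: a DaemonSet whose pods land only on nodes carrying a
# selector label, so relabeling a node (e.g. blue -> green) launches or
# unschedules daemon pods, as the log above shows.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate   # the test patches the strategy to RollingUpdate mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue       # illustrative key; the log speaks of "blue"/"green" labels
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2
```

Updating `spec.template.spec.nodeSelector` to `color: green` (plus relabeling a node) reproduces the unschedule/reschedule sequence logged above.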
• [SLOW TEST:17.141 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":212,"skipped":3549,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:09:33.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Mar 16 14:09:33.128: INFO: namespace kubectl-2595
Mar 16 14:09:33.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2595'
Mar 16 14:09:33.468: INFO: stderr: ""
Mar 16 14:09:33.468: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Mar 16 14:09:34.533: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 16 14:09:34.533: INFO: Found 0 / 1
Mar 16 14:09:35.472: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 16 14:09:35.472: INFO: Found 0 / 1
Mar 16 14:09:36.472: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 16 14:09:36.472: INFO: Found 0 / 1
Mar 16 14:09:37.472: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 16 14:09:37.472: INFO: Found 0 / 1
Mar 16 14:09:38.472: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 16 14:09:38.472: INFO: Found 1 / 1
Mar 16 14:09:38.472: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Mar 16 14:09:38.476: INFO: Selector matched 1 pods for map[app:agnhost]
Mar 16 14:09:38.476: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Mar 16 14:09:38.476: INFO: wait on agnhost-master startup in kubectl-2595
Mar 16 14:09:38.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-9xw8z agnhost-master --namespace=kubectl-2595'
Mar 16 14:09:38.582: INFO: stderr: ""
Mar 16 14:09:38.582: INFO: stdout: "Paused\n"
STEP: exposing RC
Mar 16 14:09:38.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2595'
Mar 16 14:09:38.708: INFO: stderr: ""
Mar 16 14:09:38.708: INFO: stdout: "service/rm2 exposed\n"
Mar 16 14:09:38.719: INFO: Service rm2 in namespace kubectl-2595 found.
STEP: exposing service
Mar 16 14:09:40.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2595'
Mar 16 14:09:40.857: INFO: stderr: ""
Mar 16 14:09:40.857: INFO: stdout: "service/rm3 exposed\n"
Mar 16 14:09:40.868: INFO: Service rm3 in namespace kubectl-2595 found.
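The `kubectl expose rc` call logged above is roughly equivalent to creating a Service by hand. A sketch of what `rm2` likely looks like (the `app: agnhost` selector is inferred from the pod labels shown in the log; `kubectl expose` copies the selector from the exposed resource):

```yaml
# Approximate Service produced by:
#   kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-2595
spec:
  selector:
    app: agnhost      # taken from the RC's pod template labels
  ports:
  - port: 1234        # port the Service listens on
    targetPort: 6379  # container port traffic is forwarded to
```

The second invocation, `kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379`, then creates a further Service (`rm3`) with the same selector but listening on port 2345.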
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:09:42.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2595" for this suite.
• [SLOW TEST:9.801 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl expose
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":213,"skipped":3557,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:09:42.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-df024700-3304-43e5-bc2b-5845f007d74d
STEP: Creating a pod to test consume secrets
Mar 16 14:09:43.013: INFO: Waiting up to 5m0s for pod "pod-secrets-2d00687e-ba60-4e0a-b25a-f0d1c9d92e29" in namespace "secrets-2309" to be "Succeeded or Failed"
Mar 16 14:09:43.018: INFO: Pod "pod-secrets-2d00687e-ba60-4e0a-b25a-f0d1c9d92e29": Phase="Pending", Reason="", readiness=false. Elapsed: 5.048994ms
Mar 16 14:09:45.087: INFO: Pod "pod-secrets-2d00687e-ba60-4e0a-b25a-f0d1c9d92e29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07452255s
Mar 16 14:09:47.091: INFO: Pod "pod-secrets-2d00687e-ba60-4e0a-b25a-f0d1c9d92e29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.078326278s
STEP: Saw pod success
Mar 16 14:09:47.091: INFO: Pod "pod-secrets-2d00687e-ba60-4e0a-b25a-f0d1c9d92e29" satisfied condition "Succeeded or Failed"
Mar 16 14:09:47.093: INFO: Trying to get logs from node latest-worker pod pod-secrets-2d00687e-ba60-4e0a-b25a-f0d1c9d92e29 container secret-volume-test:
STEP: delete the pod
Mar 16 14:09:47.164: INFO: Waiting for pod pod-secrets-2d00687e-ba60-4e0a-b25a-f0d1c9d92e29 to disappear
Mar 16 14:09:47.169: INFO: Pod pod-secrets-2d00687e-ba60-4e0a-b25a-f0d1c9d92e29 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:09:47.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2309" for this suite.
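The `defaultMode` being validated here sets the permission bits on the files projected from the Secret. A minimal pod sketch under stated assumptions (the secret name is from the log; the mode value, image, and command are illustrative since the log does not show the pod spec):

```yaml
# Sketch: mount a Secret as a volume with an explicit defaultMode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-df024700-3304-43e5-bc2b-5845f007d74d
      defaultMode: 0400   # octal; files appear as -r-------- inside the container
  containers:
  - name: secret-volume-test
    image: busybox        # illustrative; the test uses its own mount-test image
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
```

The pod runs to completion ("Succeeded or Failed" above) and the test inspects its logs to confirm the projected files carry the requested mode.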
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":214,"skipped":3581,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:09:47.194: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Mar 16 14:09:48.047: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Mar 16 14:09:50.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719964588, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719964588, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719964588, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719964588, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Mar 16 14:09:52.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719964588, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719964588, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719964588, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719964588, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Mar 16 14:09:55.086: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 16 14:09:55.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-498-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:09:56.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3403" for this suite.
STEP: Destroying namespace "webhook-3403-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.219 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with different stored version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":215,"skipped":3608,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:09:56.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Mar 16 14:09:56.499: INFO: >>> kubeConfig: /root/.kube/config
Mar 16 14:09:59.386: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:10:08.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2512" for this suite.
• [SLOW TEST:12.472 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":216,"skipped":3639,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:10:08.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Mar 16 14:10:08.964: INFO: Waiting up to 5m0s for pod "downward-api-b17e1f93-3d8a-4fe7-b2f2-d25a33d73b36" in namespace "downward-api-5752" to be "Succeeded or Failed"
Mar 16 14:10:08.985: INFO: Pod "downward-api-b17e1f93-3d8a-4fe7-b2f2-d25a33d73b36": Phase="Pending", Reason="", readiness=false. Elapsed: 20.485509ms
Mar 16 14:10:10.988: INFO: Pod "downward-api-b17e1f93-3d8a-4fe7-b2f2-d25a33d73b36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023807941s
Mar 16 14:10:12.992: INFO: Pod "downward-api-b17e1f93-3d8a-4fe7-b2f2-d25a33d73b36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028003295s
STEP: Saw pod success
Mar 16 14:10:12.993: INFO: Pod "downward-api-b17e1f93-3d8a-4fe7-b2f2-d25a33d73b36" satisfied condition "Succeeded or Failed"
Mar 16 14:10:12.996: INFO: Trying to get logs from node latest-worker2 pod downward-api-b17e1f93-3d8a-4fe7-b2f2-d25a33d73b36 container dapi-container:
STEP: delete the pod
Mar 16 14:10:13.023: INFO: Waiting for pod downward-api-b17e1f93-3d8a-4fe7-b2f2-d25a33d73b36 to disappear
Mar 16 14:10:13.030: INFO: Pod downward-api-b17e1f93-3d8a-4fe7-b2f2-d25a33d73b36 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:10:13.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5752" for this suite.
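The downward-API wiring this test checks exposes pod metadata to the container as environment variables. A sketch of a pod with that shape (the `metadata.uid` field path is the standard one; the image, command, and variable names are illustrative, as the log does not show the pod spec):

```yaml
# Sketch: surface the pod's own UID (and name) via the downward API.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep POD_"]   # prints the injected variables
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid    # the field this conformance test asserts on
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
```

As in the log above, the test then reads the completed pod's output and verifies the UID value matches the pod's actual `metadata.uid`.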
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3656,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:10:13.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Mar 16 14:10:13.096: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8825'
Mar 16 14:10:13.376: INFO: stderr: ""
Mar 16 14:10:13.376: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 16 14:10:13.376: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8825'
Mar 16 14:10:13.503: INFO: stderr: ""
Mar 16 14:10:13.503: INFO: stdout: "update-demo-nautilus-9c7pc update-demo-nautilus-9sbsx "
Mar 16 14:10:13.503: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9c7pc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8825'
Mar 16 14:10:13.593: INFO: stderr: ""
Mar 16 14:10:13.593: INFO: stdout: ""
Mar 16 14:10:13.593: INFO: update-demo-nautilus-9c7pc is created but not running
Mar 16 14:10:18.593: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8825'
Mar 16 14:10:18.689: INFO: stderr: ""
Mar 16 14:10:18.689: INFO: stdout: "update-demo-nautilus-9c7pc update-demo-nautilus-9sbsx "
Mar 16 14:10:18.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9c7pc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8825'
Mar 16 14:10:18.787: INFO: stderr: ""
Mar 16 14:10:18.787: INFO: stdout: "true"
Mar 16 14:10:18.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9c7pc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8825'
Mar 16 14:10:18.880: INFO: stderr: ""
Mar 16 14:10:18.880: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 16 14:10:18.880: INFO: validating pod update-demo-nautilus-9c7pc
Mar 16 14:10:18.884: INFO: got data: { "image": "nautilus.jpg" }
Mar 16 14:10:18.884: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 16 14:10:18.884: INFO: update-demo-nautilus-9c7pc is verified up and running
Mar 16 14:10:18.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9sbsx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8825'
Mar 16 14:10:18.977: INFO: stderr: ""
Mar 16 14:10:18.977: INFO: stdout: "true"
Mar 16 14:10:18.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9sbsx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8825'
Mar 16 14:10:19.071: INFO: stderr: ""
Mar 16 14:10:19.071: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 16 14:10:19.071: INFO: validating pod update-demo-nautilus-9sbsx
Mar 16 14:10:19.075: INFO: got data: { "image": "nautilus.jpg" }
Mar 16 14:10:19.075: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 16 14:10:19.075: INFO: update-demo-nautilus-9sbsx is verified up and running
STEP: scaling down the replication controller
Mar 16 14:10:19.080: INFO: scanned /root for discovery docs:
Mar 16 14:10:19.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8825'
Mar 16 14:10:20.199: INFO: stderr: ""
Mar 16 14:10:20.199: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 16 14:10:20.199: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8825'
Mar 16 14:10:20.295: INFO: stderr: ""
Mar 16 14:10:20.295: INFO: stdout: "update-demo-nautilus-9c7pc update-demo-nautilus-9sbsx "
STEP: Replicas for name=update-demo: expected=1 actual=2
Mar 16 14:10:25.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8825'
Mar 16 14:10:25.395: INFO: stderr: ""
Mar 16 14:10:25.395: INFO: stdout: "update-demo-nautilus-9c7pc update-demo-nautilus-9sbsx "
STEP: Replicas for name=update-demo: expected=1 actual=2
Mar 16 14:10:30.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8825'
Mar 16 14:10:30.492: INFO: stderr: ""
Mar 16 14:10:30.492: INFO: stdout: "update-demo-nautilus-9c7pc update-demo-nautilus-9sbsx "
STEP: Replicas for name=update-demo: expected=1 actual=2
Mar 16 14:10:35.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8825'
Mar 16 14:10:35.587: INFO: stderr: ""
Mar 16 14:10:35.587: INFO: stdout: "update-demo-nautilus-9sbsx "
Mar 16 14:10:35.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9sbsx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8825'
Mar 16 14:10:35.674: INFO: stderr: ""
Mar 16 14:10:35.674: INFO: stdout: "true"
Mar 16 14:10:35.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9sbsx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8825'
Mar 16 14:10:35.765: INFO: stderr: ""
Mar 16 14:10:35.765: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 16 14:10:35.765: INFO: validating pod update-demo-nautilus-9sbsx
Mar 16 14:10:35.769: INFO: got data: { "image": "nautilus.jpg" }
Mar 16 14:10:35.769: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 16 14:10:35.769: INFO: update-demo-nautilus-9sbsx is verified up and running
STEP: scaling up the replication controller
Mar 16 14:10:35.774: INFO: scanned /root for discovery docs:
Mar 16 14:10:35.774: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8825'
Mar 16 14:10:36.886: INFO: stderr: ""
Mar 16 14:10:36.886: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Mar 16 14:10:36.887: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8825'
Mar 16 14:10:36.983: INFO: stderr: ""
Mar 16 14:10:36.983: INFO: stdout: "update-demo-nautilus-9sbsx update-demo-nautilus-gs4v6 "
Mar 16 14:10:36.983: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9sbsx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8825'
Mar 16 14:10:37.078: INFO: stderr: ""
Mar 16 14:10:37.078: INFO: stdout: "true"
Mar 16 14:10:37.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9sbsx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8825'
Mar 16 14:10:37.166: INFO: stderr: ""
Mar 16 14:10:37.166: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 16 14:10:37.166: INFO: validating pod update-demo-nautilus-9sbsx
Mar 16 14:10:37.169: INFO: got data: { "image": "nautilus.jpg" }
Mar 16 14:10:37.169: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 16 14:10:37.169: INFO: update-demo-nautilus-9sbsx is verified up and running
Mar 16 14:10:37.169: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gs4v6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8825'
Mar 16 14:10:37.264: INFO: stderr: ""
Mar 16 14:10:37.264: INFO: stdout: ""
Mar 16 14:10:37.264: INFO: update-demo-nautilus-gs4v6 is created but not running
Mar 16 14:10:42.264: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8825'
Mar 16 14:10:42.363: INFO: stderr: ""
Mar 16 14:10:42.363: INFO: stdout: "update-demo-nautilus-9sbsx update-demo-nautilus-gs4v6 "
Mar 16 14:10:42.363: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9sbsx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8825'
Mar 16 14:10:42.460: INFO: stderr: ""
Mar 16 14:10:42.460: INFO: stdout: "true"
Mar 16 14:10:42.460: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9sbsx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8825'
Mar 16 14:10:42.556: INFO: stderr: ""
Mar 16 14:10:42.556: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 16 14:10:42.556: INFO: validating pod update-demo-nautilus-9sbsx
Mar 16 14:10:42.559: INFO: got data: { "image": "nautilus.jpg" }
Mar 16 14:10:42.559: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 16 14:10:42.559: INFO: update-demo-nautilus-9sbsx is verified up and running
Mar 16 14:10:42.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gs4v6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8825'
Mar 16 14:10:42.650: INFO: stderr: ""
Mar 16 14:10:42.650: INFO: stdout: "true"
Mar 16 14:10:42.650: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gs4v6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8825'
Mar 16 14:10:42.741: INFO: stderr: ""
Mar 16 14:10:42.741: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Mar 16 14:10:42.741: INFO: validating pod update-demo-nautilus-gs4v6
Mar 16 14:10:42.745: INFO: got data: { "image": "nautilus.jpg" }
Mar 16 14:10:42.745: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Mar 16 14:10:42.745: INFO: update-demo-nautilus-gs4v6 is verified up and running
STEP: using delete to clean up resources
Mar 16 14:10:42.745: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8825'
Mar 16 14:10:42.873: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Mar 16 14:10:42.873: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Mar 16 14:10:42.873: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8825'
Mar 16 14:10:42.973: INFO: stderr: "No resources found in kubectl-8825 namespace.\n"
Mar 16 14:10:42.973: INFO: stdout: ""
Mar 16 14:10:42.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8825 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 16 14:10:43.067: INFO: stderr: ""
Mar 16 14:10:43.067: INFO: stdout: "update-demo-nautilus-9sbsx\nupdate-demo-nautilus-gs4v6\n"
Mar 16 14:10:43.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8825'
Mar 16 14:10:43.666: INFO: stderr: "No resources found in kubectl-8825 namespace.\n"
Mar 16 14:10:43.666: INFO: stdout: ""
Mar 16 14:10:43.666: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8825 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Mar 16 14:10:43.761: INFO: stderr: ""
Mar 16 14:10:43.761: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:10:43.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8825" for this suite.
• [SLOW TEST:30.729 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":218,"skipped":3682,"failed":0}
SSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:10:43.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account
to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 16 14:10:43.945: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:10:43.949: INFO: Number of nodes with available pods: 0 Mar 16 14:10:43.949: INFO: Node latest-worker is running more than one daemon pod Mar 16 14:10:44.954: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:10:44.959: INFO: Number of nodes with available pods: 0 Mar 16 14:10:44.959: INFO: Node latest-worker is running more than one daemon pod Mar 16 14:10:45.954: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:10:45.958: INFO: Number of nodes with available pods: 0 Mar 16 14:10:45.958: INFO: Node latest-worker is running more than one daemon pod Mar 16 14:10:46.959: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:10:46.962: INFO: Number of nodes with available pods: 1 Mar 16 14:10:46.962: INFO: Node latest-worker2 is running more than one daemon pod Mar 16 14:10:47.954: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking 
this node Mar 16 14:10:47.957: INFO: Number of nodes with available pods: 2 Mar 16 14:10:47.957: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Mar 16 14:10:47.983: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 16 14:10:47.995: INFO: Number of nodes with available pods: 2 Mar 16 14:10:47.995: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3779, will wait for the garbage collector to delete the pods Mar 16 14:10:49.277: INFO: Deleting DaemonSet.extensions daemon-set took: 27.081589ms Mar 16 14:10:49.377: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.39004ms Mar 16 14:11:02.780: INFO: Number of nodes with available pods: 0 Mar 16 14:11:02.780: INFO: Number of running nodes: 0, number of available pods: 0 Mar 16 14:11:02.783: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3779/daemonsets","resourceVersion":"289352"},"items":null} Mar 16 14:11:02.786: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3779/pods","resourceVersion":"289352"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:11:02.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3779" for this suite. 
• [SLOW TEST:19.035 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":219,"skipped":3685,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:11:02.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 16 14:11:02.965: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab34a91c-12f0-4eb5-a976-33afd19da763" in namespace "downward-api-4041" to be "Succeeded or Failed"
Mar 16 14:11:03.086: INFO: Pod "downwardapi-volume-ab34a91c-12f0-4eb5-a976-33afd19da763": Phase="Pending", Reason="", readiness=false. Elapsed: 121.434708ms
Mar 16 14:11:05.090: INFO: Pod "downwardapi-volume-ab34a91c-12f0-4eb5-a976-33afd19da763": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125502133s
Mar 16 14:11:07.100: INFO: Pod "downwardapi-volume-ab34a91c-12f0-4eb5-a976-33afd19da763": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.135613126s
STEP: Saw pod success
Mar 16 14:11:07.100: INFO: Pod "downwardapi-volume-ab34a91c-12f0-4eb5-a976-33afd19da763" satisfied condition "Succeeded or Failed"
Mar 16 14:11:07.103: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-ab34a91c-12f0-4eb5-a976-33afd19da763 container client-container:
STEP: delete the pod
Mar 16 14:11:07.168: INFO: Waiting for pod downwardapi-volume-ab34a91c-12f0-4eb5-a976-33afd19da763 to disappear
Mar 16 14:11:07.178: INFO: Pod downwardapi-volume-ab34a91c-12f0-4eb5-a976-33afd19da763 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:11:07.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4041" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":220,"skipped":3690,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:11:07.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 14:11:07.248: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:11:07.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2345" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":221,"skipped":3705,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:11:07.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 14:11:07.942: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 16 14:11:09.057: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:11:10.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4047" for this suite. 
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":222,"skipped":3710,"failed":0} ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:11:10.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-s5n5 STEP: Creating a pod to test atomic-volume-subpath Mar 16 14:11:10.489: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-s5n5" in namespace "subpath-9105" to be "Succeeded or Failed" Mar 16 14:11:10.542: INFO: Pod "pod-subpath-test-configmap-s5n5": Phase="Pending", Reason="", readiness=false. Elapsed: 53.168332ms Mar 16 14:11:12.546: INFO: Pod "pod-subpath-test-configmap-s5n5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056890993s Mar 16 14:11:14.550: INFO: Pod "pod-subpath-test-configmap-s5n5": Phase="Running", Reason="", readiness=true. Elapsed: 4.061127838s Mar 16 14:11:16.555: INFO: Pod "pod-subpath-test-configmap-s5n5": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.065343413s Mar 16 14:11:18.559: INFO: Pod "pod-subpath-test-configmap-s5n5": Phase="Running", Reason="", readiness=true. Elapsed: 8.069409194s Mar 16 14:11:20.563: INFO: Pod "pod-subpath-test-configmap-s5n5": Phase="Running", Reason="", readiness=true. Elapsed: 10.073582593s Mar 16 14:11:22.573: INFO: Pod "pod-subpath-test-configmap-s5n5": Phase="Running", Reason="", readiness=true. Elapsed: 12.084248896s Mar 16 14:11:24.578: INFO: Pod "pod-subpath-test-configmap-s5n5": Phase="Running", Reason="", readiness=true. Elapsed: 14.088574334s Mar 16 14:11:26.582: INFO: Pod "pod-subpath-test-configmap-s5n5": Phase="Running", Reason="", readiness=true. Elapsed: 16.092467238s Mar 16 14:11:28.586: INFO: Pod "pod-subpath-test-configmap-s5n5": Phase="Running", Reason="", readiness=true. Elapsed: 18.096715733s Mar 16 14:11:30.592: INFO: Pod "pod-subpath-test-configmap-s5n5": Phase="Running", Reason="", readiness=true. Elapsed: 20.103161932s Mar 16 14:11:32.596: INFO: Pod "pod-subpath-test-configmap-s5n5": Phase="Running", Reason="", readiness=true. Elapsed: 22.106764249s Mar 16 14:11:34.600: INFO: Pod "pod-subpath-test-configmap-s5n5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.110701768s STEP: Saw pod success Mar 16 14:11:34.600: INFO: Pod "pod-subpath-test-configmap-s5n5" satisfied condition "Succeeded or Failed" Mar 16 14:11:34.603: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-s5n5 container test-container-subpath-configmap-s5n5: STEP: delete the pod Mar 16 14:11:34.620: INFO: Waiting for pod pod-subpath-test-configmap-s5n5 to disappear Mar 16 14:11:34.631: INFO: Pod pod-subpath-test-configmap-s5n5 no longer exists STEP: Deleting pod pod-subpath-test-configmap-s5n5 Mar 16 14:11:34.631: INFO: Deleting pod "pod-subpath-test-configmap-s5n5" in namespace "subpath-9105" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:11:34.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9105" for this suite. • [SLOW TEST:24.512 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":223,"skipped":3710,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 
STEP: Creating a kubernetes client
Mar 16 14:11:34.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Mar 16 14:11:39.267: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9862 pod-service-account-b16fc477-315f-405d-a925-869a2ac1dd05 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Mar 16 14:11:39.492: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9862 pod-service-account-b16fc477-315f-405d-a925-869a2ac1dd05 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Mar 16 14:11:39.709: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9862 pod-service-account-b16fc477-315f-405d-a925-869a2ac1dd05 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:11:39.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9862" for this suite.
• [SLOW TEST:5.263 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
should mount an API token into pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":224,"skipped":3728,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:11:39.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-d99b086b-77f2-4367-8541-41457aede24c
STEP: Creating a pod to test consume configMaps
Mar 16 14:11:39.996: INFO: Waiting up to 5m0s for pod "pod-configmaps-920dbe6b-2d43-4452-a27a-5018d1874ecf" in namespace "configmap-3944" to be "Succeeded or Failed"
Mar 16 14:11:40.017: INFO: Pod "pod-configmaps-920dbe6b-2d43-4452-a27a-5018d1874ecf": Phase="Pending", Reason="", readiness=false. Elapsed: 20.835381ms
Mar 16 14:11:42.022: INFO: Pod "pod-configmaps-920dbe6b-2d43-4452-a27a-5018d1874ecf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025297953s
Mar 16 14:11:44.026: INFO: Pod "pod-configmaps-920dbe6b-2d43-4452-a27a-5018d1874ecf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029685505s
STEP: Saw pod success
Mar 16 14:11:44.026: INFO: Pod "pod-configmaps-920dbe6b-2d43-4452-a27a-5018d1874ecf" satisfied condition "Succeeded or Failed"
Mar 16 14:11:44.030: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-920dbe6b-2d43-4452-a27a-5018d1874ecf container configmap-volume-test:
STEP: delete the pod
Mar 16 14:11:44.052: INFO: Waiting for pod pod-configmaps-920dbe6b-2d43-4452-a27a-5018d1874ecf to disappear
Mar 16 14:11:44.056: INFO: Pod pod-configmaps-920dbe6b-2d43-4452-a27a-5018d1874ecf no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:11:44.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3944" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3785,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:11:44.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-7027
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Mar 16 14:11:44.153: INFO: Found 0 stateful pods, waiting for 3
Mar 16 14:11:54.212: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 16 14:11:54.213: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 16 14:11:54.213: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Mar 16 14:12:04.158: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 16 14:12:04.158: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 16 14:12:04.158: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Mar 16 14:12:04.183: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Mar 16 14:12:14.232: INFO: Updating stateful set ss2
Mar 16 14:12:14.549: INFO: Waiting for Pod statefulset-7027/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Mar 16 14:12:25.250: INFO: Found 2 stateful pods, waiting for 3
Mar 16 14:12:35.254: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Mar 16 14:12:35.254: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Mar 16 14:12:35.254: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Mar 16 14:12:35.275: INFO: Updating stateful set ss2
Mar 16 14:12:35.375: INFO: Waiting for Pod statefulset-7027/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 16 14:12:45.399: INFO: Updating stateful set ss2
Mar 16 14:12:45.411: INFO: Waiting for StatefulSet statefulset-7027/ss2 to complete update
Mar 16 14:12:45.411: INFO: Waiting for Pod statefulset-7027/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Mar 16 14:12:55.418: INFO: Waiting for StatefulSet statefulset-7027/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Mar 16 14:13:05.417: INFO: Deleting all statefulset in ns statefulset-7027
Mar 16 14:13:05.420: INFO: Scaling statefulset ss2 to 0
Mar 16 14:13:25.436: INFO: Waiting for statefulset status.replicas updated to 0
Mar 16 14:13:25.439: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:13:25.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7027" for this suite.
• [SLOW TEST:101.397 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should perform canary updates and phased rolling updates of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":226,"skipped":3799,"failed":0}
S
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:13:25.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-b071fbee-fcd7-4f6a-9ff6-5e14069d4e06
STEP: Creating a pod to test consume configMaps
Mar 16 14:13:25.537: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1b7d0105-8b37-4a44-bc5a-49c4eb5501d5" in namespace "projected-2311" to be "Succeeded or Failed"
Mar 16 14:13:25.543: INFO: Pod "pod-projected-configmaps-1b7d0105-8b37-4a44-bc5a-49c4eb5501d5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.921472ms
Mar 16 14:13:27.567: INFO: Pod "pod-projected-configmaps-1b7d0105-8b37-4a44-bc5a-49c4eb5501d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029936005s
Mar 16 14:13:29.571: INFO: Pod "pod-projected-configmaps-1b7d0105-8b37-4a44-bc5a-49c4eb5501d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034108667s
STEP: Saw pod success
Mar 16 14:13:29.571: INFO: Pod "pod-projected-configmaps-1b7d0105-8b37-4a44-bc5a-49c4eb5501d5" satisfied condition "Succeeded or Failed"
Mar 16 14:13:29.574: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-1b7d0105-8b37-4a44-bc5a-49c4eb5501d5 container projected-configmap-volume-test:
STEP: delete the pod
Mar 16 14:13:29.646: INFO: Waiting for pod pod-projected-configmaps-1b7d0105-8b37-4a44-bc5a-49c4eb5501d5 to disappear
Mar 16 14:13:29.649: INFO: Pod pod-projected-configmaps-1b7d0105-8b37-4a44-bc5a-49c4eb5501d5 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:13:29.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2311" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":227,"skipped":3800,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:13:29.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Mar 16 14:13:29.697: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:13:35.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9057" for this suite. 
• [SLOW TEST:5.527 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":228,"skipped":3803,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:13:35.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5728.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5728.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5728.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5728.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5728.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5728.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 16 14:13:41.322: INFO: DNS probes using dns-5728/dns-test-037a54ed-c5e6-421f-96eb-f3baf4ddd238 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:13:41.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5728" for this suite.
• [SLOW TEST:6.306 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":229,"skipped":3809,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:13:41.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4854 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4854 STEP: Creating statefulset with conflicting port in namespace statefulset-4854 STEP: Waiting until pod test-pod will start running in namespace statefulset-4854 STEP: 
Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-4854 Mar 16 14:13:45.970: INFO: Observed stateful pod in namespace: statefulset-4854, name: ss-0, uid: 15f7628d-0550-4044-8f52-b3fbbf005865, status phase: Pending. Waiting for statefulset controller to delete. Mar 16 14:13:46.281: INFO: Observed stateful pod in namespace: statefulset-4854, name: ss-0, uid: 15f7628d-0550-4044-8f52-b3fbbf005865, status phase: Failed. Waiting for statefulset controller to delete. Mar 16 14:13:46.287: INFO: Observed stateful pod in namespace: statefulset-4854, name: ss-0, uid: 15f7628d-0550-4044-8f52-b3fbbf005865, status phase: Failed. Waiting for statefulset controller to delete. Mar 16 14:13:46.298: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4854 STEP: Removing pod with conflicting port in namespace statefulset-4854 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-4854 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 16 14:13:50.394: INFO: Deleting all statefulset in ns statefulset-4854 Mar 16 14:13:50.396: INFO: Scaling statefulset ss to 0 Mar 16 14:14:10.410: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 14:14:10.413: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:14:10.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4854" for this suite. 
• [SLOW TEST:28.959 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":230,"skipped":3877,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:14:10.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 16 14:14:18.548: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 16 14:14:18.554: INFO: Pod pod-with-prestop-exec-hook still exists Mar 16 14:14:20.554: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 16 14:14:20.559: INFO: Pod pod-with-prestop-exec-hook still exists Mar 16 14:14:22.554: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 16 14:14:22.559: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:14:22.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4731" for this suite. 
• [SLOW TEST:12.137 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":3879,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:14:22.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Mar 16 14:14:22.658: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:14:39.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7202" for this suite. • [SLOW TEST:16.655 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":232,"skipped":3884,"failed":0} SSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:14:39.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:14:39.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-4243" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":233,"skipped":3889,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:14:39.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9035.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9035.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 16 14:14:45.520: INFO: DNS probes using dns-9035/dns-test-5bdedb48-e58c-46a9-8761-0031512a82b2 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:14:45.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9035" for this suite. 
• [SLOW TEST:6.185 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":234,"skipped":3894,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:14:45.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 16 14:14:45.726: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dba9ff8d-3393-48ab-9750-80b7f191c1ef" in namespace "downward-api-7243" to be "Succeeded or Failed" Mar 16 14:14:45.879: INFO: Pod "downwardapi-volume-dba9ff8d-3393-48ab-9750-80b7f191c1ef": Phase="Pending", Reason="", readiness=false. Elapsed: 153.119748ms Mar 16 14:14:47.884: INFO: Pod "downwardapi-volume-dba9ff8d-3393-48ab-9750-80b7f191c1ef": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.157856929s Mar 16 14:14:49.892: INFO: Pod "downwardapi-volume-dba9ff8d-3393-48ab-9750-80b7f191c1ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.165351106s STEP: Saw pod success Mar 16 14:14:49.892: INFO: Pod "downwardapi-volume-dba9ff8d-3393-48ab-9750-80b7f191c1ef" satisfied condition "Succeeded or Failed" Mar 16 14:14:49.896: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-dba9ff8d-3393-48ab-9750-80b7f191c1ef container client-container: STEP: delete the pod Mar 16 14:14:49.927: INFO: Waiting for pod downwardapi-volume-dba9ff8d-3393-48ab-9750-80b7f191c1ef to disappear Mar 16 14:14:49.946: INFO: Pod downwardapi-volume-dba9ff8d-3393-48ab-9750-80b7f191c1ef no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:14:49.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7243" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":3909,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:14:49.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Mar 16 14:14:50.036: INFO: Waiting up to 5m0s for pod "downward-api-17edb21e-9bb4-4236-b108-d47ca77769bc" in namespace "downward-api-4562" to be "Succeeded or Failed" Mar 16 14:14:50.233: INFO: Pod "downward-api-17edb21e-9bb4-4236-b108-d47ca77769bc": Phase="Pending", Reason="", readiness=false. Elapsed: 196.629883ms Mar 16 14:14:52.236: INFO: Pod "downward-api-17edb21e-9bb4-4236-b108-d47ca77769bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200217994s Mar 16 14:14:54.242: INFO: Pod "downward-api-17edb21e-9bb4-4236-b108-d47ca77769bc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.205328s STEP: Saw pod success Mar 16 14:14:54.242: INFO: Pod "downward-api-17edb21e-9bb4-4236-b108-d47ca77769bc" satisfied condition "Succeeded or Failed" Mar 16 14:14:54.245: INFO: Trying to get logs from node latest-worker pod downward-api-17edb21e-9bb4-4236-b108-d47ca77769bc container dapi-container: STEP: delete the pod Mar 16 14:14:54.265: INFO: Waiting for pod downward-api-17edb21e-9bb4-4236-b108-d47ca77769bc to disappear Mar 16 14:14:54.290: INFO: Pod downward-api-17edb21e-9bb4-4236-b108-d47ca77769bc no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:14:54.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4562" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":3945,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:14:54.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 16 14:14:54.364: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8b5331b-17c6-4588-a528-a0a93267c621" in namespace "downward-api-4700" to be "Succeeded or Failed" Mar 16 14:14:54.375: INFO: Pod "downwardapi-volume-a8b5331b-17c6-4588-a528-a0a93267c621": Phase="Pending", Reason="", readiness=false. Elapsed: 11.065536ms Mar 16 14:14:56.379: INFO: Pod "downwardapi-volume-a8b5331b-17c6-4588-a528-a0a93267c621": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015049115s Mar 16 14:14:58.383: INFO: Pod "downwardapi-volume-a8b5331b-17c6-4588-a528-a0a93267c621": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019355666s STEP: Saw pod success Mar 16 14:14:58.384: INFO: Pod "downwardapi-volume-a8b5331b-17c6-4588-a528-a0a93267c621" satisfied condition "Succeeded or Failed" Mar 16 14:14:58.386: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-a8b5331b-17c6-4588-a528-a0a93267c621 container client-container: STEP: delete the pod Mar 16 14:14:58.409: INFO: Waiting for pod downwardapi-volume-a8b5331b-17c6-4588-a528-a0a93267c621 to disappear Mar 16 14:14:58.411: INFO: Pod downwardapi-volume-a8b5331b-17c6-4588-a528-a0a93267c621 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:14:58.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4700" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":3970,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:14:58.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 16 14:14:58.485: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2887351b-a5c5-4bff-ac63-9b5570e88edb" in namespace "projected-9495" to be "Succeeded or Failed" Mar 16 14:14:58.500: INFO: Pod "downwardapi-volume-2887351b-a5c5-4bff-ac63-9b5570e88edb": Phase="Pending", Reason="", readiness=false. Elapsed: 14.53707ms Mar 16 14:15:00.505: INFO: Pod "downwardapi-volume-2887351b-a5c5-4bff-ac63-9b5570e88edb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019496776s Mar 16 14:15:02.509: INFO: Pod "downwardapi-volume-2887351b-a5c5-4bff-ac63-9b5570e88edb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024026046s STEP: Saw pod success Mar 16 14:15:02.510: INFO: Pod "downwardapi-volume-2887351b-a5c5-4bff-ac63-9b5570e88edb" satisfied condition "Succeeded or Failed" Mar 16 14:15:02.512: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-2887351b-a5c5-4bff-ac63-9b5570e88edb container client-container: STEP: delete the pod Mar 16 14:15:02.544: INFO: Waiting for pod downwardapi-volume-2887351b-a5c5-4bff-ac63-9b5570e88edb to disappear Mar 16 14:15:02.555: INFO: Pod downwardapi-volume-2887351b-a5c5-4bff-ac63-9b5570e88edb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:15:02.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9495" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":238,"skipped":3980,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:15:02.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 14:15:02.651: INFO: (0) /api/v1/nodes/latest-worker2/proxy/logs/:
containers/ pods/ (200; 5.490418ms) Mar 16 14:15:02.655: INFO: (1) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.852278ms) Mar 16 14:15:02.659: INFO: (2) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.444101ms) Mar 16 14:15:02.663: INFO: (3) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.763395ms) Mar 16 14:15:02.666: INFO: (4) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.868587ms) Mar 16 14:15:02.670: INFO: (5) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.715798ms) Mar 16 14:15:02.674: INFO: (6) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.672433ms) Mar 16 14:15:02.681: INFO: (7) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 7.225964ms) Mar 16 14:15:02.684: INFO: (8) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.221342ms) Mar 16 14:15:02.712: INFO: (9) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 27.868204ms) Mar 16 14:15:02.716: INFO: (10) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.537615ms) Mar 16 14:15:02.719: INFO: (11) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.370076ms) Mar 16 14:15:02.722: INFO: (12) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.131287ms) Mar 16 14:15:02.726: INFO: (13) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 3.100407ms) Mar 16 14:15:02.728: INFO: (14) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.70477ms) Mar 16 14:15:02.731: INFO: (15) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.667569ms) Mar 16 14:15:02.734: INFO: (16) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.530086ms) Mar 16 14:15:02.736: INFO: (17) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.550356ms) Mar 16 14:15:02.739: INFO: (18) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/ (200; 2.969978ms) Mar 16 14:15:02.742: INFO: (19) /api/v1/nodes/latest-worker2/proxy/logs/: containers/ pods/
(200; 2.691842ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:15:02.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7846" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":275,"completed":239,"skipped":3989,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:15:02.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Mar 16 14:15:02.844: INFO: Waiting up to 5m0s for pod "downward-api-b9d7c48d-17fb-4a5a-ab43-66ab7960351c" in namespace "downward-api-6515" to be "Succeeded or Failed"
Mar 16 14:15:02.849: INFO: Pod "downward-api-b9d7c48d-17fb-4a5a-ab43-66ab7960351c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.88742ms
Mar 16 14:15:04.852: INFO: Pod "downward-api-b9d7c48d-17fb-4a5a-ab43-66ab7960351c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008379892s
Mar 16 14:15:06.855: INFO: Pod "downward-api-b9d7c48d-17fb-4a5a-ab43-66ab7960351c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011487967s
STEP: Saw pod success
Mar 16 14:15:06.855: INFO: Pod "downward-api-b9d7c48d-17fb-4a5a-ab43-66ab7960351c" satisfied condition "Succeeded or Failed"
Mar 16 14:15:06.858: INFO: Trying to get logs from node latest-worker2 pod downward-api-b9d7c48d-17fb-4a5a-ab43-66ab7960351c container dapi-container:
STEP: delete the pod
Mar 16 14:15:06.920: INFO: Waiting for pod downward-api-b9d7c48d-17fb-4a5a-ab43-66ab7960351c to disappear
Mar 16 14:15:06.934: INFO: Pod downward-api-b9d7c48d-17fb-4a5a-ab43-66ab7960351c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:15:06.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6515" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4026,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:15:06.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Mar 16 14:15:13.916: INFO: 10 pods remaining
Mar 16 14:15:13.916: INFO: 0 pods has nil DeletionTimestamp
Mar 16 14:15:13.916: INFO:
Mar 16 14:15:14.677: INFO: 0 pods remaining
Mar 16 14:15:14.677: INFO: 0 pods has nil DeletionTimestamp
Mar 16 14:15:14.677: INFO:
STEP: Gathering metrics
W0316 14:15:15.648748 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Mar 16 14:15:15.648: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:15:15.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3128" for this suite.
• [SLOW TEST:8.717 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":241,"skipped":4039,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:15:15.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 16 14:15:16.056: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:15:22.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2881" for this suite.
• [SLOW TEST:6.726 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":242,"skipped":4083,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:15:22.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-a63282b4-ceda-4a59-b343-fc670535fdf6
STEP: Creating a pod to test consume configMaps
Mar 16 14:15:22.491: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9b41e8a2-73e4-4e37-9365-20d56ae74f7e" in namespace "projected-8458" to be "Succeeded or Failed"
Mar 16 14:15:22.496: INFO: Pod "pod-projected-configmaps-9b41e8a2-73e4-4e37-9365-20d56ae74f7e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.601356ms
Mar 16 14:15:24.500: INFO: Pod "pod-projected-configmaps-9b41e8a2-73e4-4e37-9365-20d56ae74f7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008749618s
Mar 16 14:15:26.504: INFO: Pod "pod-projected-configmaps-9b41e8a2-73e4-4e37-9365-20d56ae74f7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012816133s
STEP: Saw pod success
Mar 16 14:15:26.504: INFO: Pod "pod-projected-configmaps-9b41e8a2-73e4-4e37-9365-20d56ae74f7e" satisfied condition "Succeeded or Failed"
Mar 16 14:15:26.507: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-9b41e8a2-73e4-4e37-9365-20d56ae74f7e container projected-configmap-volume-test:
STEP: delete the pod
Mar 16 14:15:26.542: INFO: Waiting for pod pod-projected-configmaps-9b41e8a2-73e4-4e37-9365-20d56ae74f7e to disappear
Mar 16 14:15:26.556: INFO: Pod pod-projected-configmaps-9b41e8a2-73e4-4e37-9365-20d56ae74f7e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:15:26.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8458" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":243,"skipped":4124,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:15:26.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Mar 16 14:15:26.626: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Mar 16 14:15:26.648: INFO: Waiting for terminating namespaces to be deleted...
Mar 16 14:15:26.650: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Mar 16 14:15:26.655: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 16 14:15:26.655: INFO: Container kube-proxy ready: true, restart count 0
Mar 16 14:15:26.655: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 16 14:15:26.655: INFO: Container kindnet-cni ready: true, restart count 0
Mar 16 14:15:26.655: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Mar 16 14:15:26.660: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 16 14:15:26.660: INFO: Container kindnet-cni ready: true, restart count 0
Mar 16 14:15:26.660: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Mar 16 14:15:26.660: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-7b815a3d-bd57-4cbc-9477-3b591a207fa0 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-7b815a3d-bd57-4cbc-9477-3b591a207fa0 off the node latest-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-7b815a3d-bd57-4cbc-9477-3b591a207fa0
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:15:34.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2720" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:8.237 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":244,"skipped":4124,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:15:34.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Mar 16 14:15:34.890: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0cf3e09-545f-4933-acd7-10963e3ffb39" in namespace "projected-2115" to be "Succeeded or Failed"
Mar 16 14:15:34.904: INFO: Pod "downwardapi-volume-d0cf3e09-545f-4933-acd7-10963e3ffb39": Phase="Pending", Reason="", readiness=false. Elapsed: 13.477162ms
Mar 16 14:15:36.909: INFO: Pod "downwardapi-volume-d0cf3e09-545f-4933-acd7-10963e3ffb39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019147728s
Mar 16 14:15:38.913: INFO: Pod "downwardapi-volume-d0cf3e09-545f-4933-acd7-10963e3ffb39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023084499s
STEP: Saw pod success
Mar 16 14:15:38.913: INFO: Pod "downwardapi-volume-d0cf3e09-545f-4933-acd7-10963e3ffb39" satisfied condition "Succeeded or Failed"
Mar 16 14:15:38.916: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d0cf3e09-545f-4933-acd7-10963e3ffb39 container client-container:
STEP: delete the pod
Mar 16 14:15:38.947: INFO: Waiting for pod downwardapi-volume-d0cf3e09-545f-4933-acd7-10963e3ffb39 to disappear
Mar 16 14:15:38.952: INFO: Pod downwardapi-volume-d0cf3e09-545f-4933-acd7-10963e3ffb39 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:15:38.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2115" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":245,"skipped":4137,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:15:38.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-c0035b11-765e-439b-bd98-49bd1a52e1ad
STEP: Creating a pod to test consume secrets
Mar 16 14:15:39.091: INFO: Waiting up to 5m0s for pod "pod-secrets-6e7459e8-f544-42c3-8d1e-9b1bfa12ea4c" in namespace "secrets-5123" to be "Succeeded or Failed"
Mar 16 14:15:39.095: INFO: Pod "pod-secrets-6e7459e8-f544-42c3-8d1e-9b1bfa12ea4c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.92967ms
Mar 16 14:15:41.099: INFO: Pod "pod-secrets-6e7459e8-f544-42c3-8d1e-9b1bfa12ea4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007499061s
Mar 16 14:15:43.102: INFO: Pod "pod-secrets-6e7459e8-f544-42c3-8d1e-9b1bfa12ea4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011047195s
STEP: Saw pod success
Mar 16 14:15:43.102: INFO: Pod "pod-secrets-6e7459e8-f544-42c3-8d1e-9b1bfa12ea4c" satisfied condition "Succeeded or Failed"
Mar 16 14:15:43.105: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-6e7459e8-f544-42c3-8d1e-9b1bfa12ea4c container secret-volume-test:
STEP: delete the pod
Mar 16 14:15:43.127: INFO: Waiting for pod pod-secrets-6e7459e8-f544-42c3-8d1e-9b1bfa12ea4c to disappear
Mar 16 14:15:43.131: INFO: Pod pod-secrets-6e7459e8-f544-42c3-8d1e-9b1bfa12ea4c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:15:43.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5123" for this suite.
STEP: Destroying namespace "secret-namespace-5675" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4158,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:15:43.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Mar 16 14:15:43.240: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 16 14:15:48.243: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 16 14:15:48.243: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Mar 16 14:15:48.296: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9617 /apis/apps/v1/namespaces/deployment-9617/deployments/test-cleanup-deployment f954bcae-c9d3-4755-bd9e-e37b055f847b 291518 1 2020-03-16 14:15:48 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005972618 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 16 14:15:48.343: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-9617 /apis/apps/v1/namespaces/deployment-9617/replicasets/test-cleanup-deployment-577c77b589 3eec732b-9466-4661-be29-cda3b24d1905 291527 1 2020-03-16 14:15:48 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment f954bcae-c9d3-4755-bd9e-e37b055f847b 0xc005972c27 0xc005972c28}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005972d78 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 16 14:15:48.343: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 16 14:15:48.343: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-9617 /apis/apps/v1/namespaces/deployment-9617/replicasets/test-cleanup-controller 13e9c2e0-7dc6-4e31-aefa-7263e8017d68 291521 1 2020-03-16 14:15:43 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment f954bcae-c9d3-4755-bd9e-e37b055f847b 0xc005972b27 0xc005972b28}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005972ba8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 16 14:15:48.367: INFO: Pod "test-cleanup-controller-8tfwn" is available: &Pod{ObjectMeta:{test-cleanup-controller-8tfwn test-cleanup-controller- deployment-9617 
/api/v1/namespaces/deployment-9617/pods/test-cleanup-controller-8tfwn dfc2fce0-a115-4b8d-a8ac-ac1ef493924e 291504 0 2020-03-16 14:15:43 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 13e9c2e0-7dc6-4e31-aefa-7263e8017d68 0xc005973417 0xc005973418}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4fngf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4fngf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4fngf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,
Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 14:15:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 14:15:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 14:15:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 14:15:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.222,StartTime:2020-03-16 14:15:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 14:15:45 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b742c8bfbc38d4dd1411ff900e68351f5d5735fd86deb0ec892bec4b8ddf9d57,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.222,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 16 14:15:48.367: INFO: Pod "test-cleanup-deployment-577c77b589-mjw9l" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-mjw9l test-cleanup-deployment-577c77b589- deployment-9617 /api/v1/namespaces/deployment-9617/pods/test-cleanup-deployment-577c77b589-mjw9l c620951c-e0ec-46a1-a450-82a563df5517 291528 0 2020-03-16 14:15:48 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 3eec732b-9466-4661-be29-cda3b24d1905 0xc0059735a7 0xc0059735a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4fngf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4fngf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4fngf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullS
ecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 14:15:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:15:48.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9617" for this suite. 
• [SLOW TEST:5.252 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":247,"skipped":4175,"failed":0}
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:15:48.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:15:48.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5343" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":248,"skipped":4177,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:15:48.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-68acdc30-3691-43a8-bb4b-ba6ce1754930 in namespace container-probe-2478
Mar 16 14:15:52.670: INFO: Started pod liveness-68acdc30-3691-43a8-bb4b-ba6ce1754930 in namespace container-probe-2478
STEP: checking the pod's current state and verifying that restartCount is present
Mar 16 14:15:52.673: INFO: Initial restart count of pod liveness-68acdc30-3691-43a8-bb4b-ba6ce1754930 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:19:53.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2478" for this suite.
• [SLOW TEST:245.858 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":249,"skipped":4218,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:19:54.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-8508/configmap-test-6380784e-38d5-4c06-abf3-e6086056d4a1 STEP: Creating a pod to test consume configMaps Mar 16 14:19:54.501: INFO: Waiting up to 5m0s for pod "pod-configmaps-55c88cb7-842f-4062-8eed-e59d084df4c1" in namespace "configmap-8508" to be "Succeeded or Failed" Mar 16 14:19:54.508: INFO: Pod "pod-configmaps-55c88cb7-842f-4062-8eed-e59d084df4c1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.028568ms Mar 16 14:19:56.518: INFO: Pod "pod-configmaps-55c88cb7-842f-4062-8eed-e59d084df4c1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01737723s Mar 16 14:19:58.523: INFO: Pod "pod-configmaps-55c88cb7-842f-4062-8eed-e59d084df4c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022149433s STEP: Saw pod success Mar 16 14:19:58.523: INFO: Pod "pod-configmaps-55c88cb7-842f-4062-8eed-e59d084df4c1" satisfied condition "Succeeded or Failed" Mar 16 14:19:58.526: INFO: Trying to get logs from node latest-worker pod pod-configmaps-55c88cb7-842f-4062-8eed-e59d084df4c1 container env-test: STEP: delete the pod Mar 16 14:19:58.574: INFO: Waiting for pod pod-configmaps-55c88cb7-842f-4062-8eed-e59d084df4c1 to disappear Mar 16 14:19:58.596: INFO: Pod pod-configmaps-55c88cb7-842f-4062-8eed-e59d084df4c1 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:19:58.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8508" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4280,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:19:58.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Mar 16 14:19:58.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions' Mar 16 14:19:58.814: INFO: stderr: "" Mar 16 14:19:58.814: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:19:58.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2732" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":251,"skipped":4298,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:19:58.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-fa0c6e4c-4198-46f0-8528-f5920ac564ed STEP: Creating a pod to test consume secrets Mar 16 14:19:58.893: INFO: Waiting up to 5m0s for pod "pod-secrets-732db7ab-89af-4736-ab24-8dba36aa6a3b" in namespace "secrets-4400" to be "Succeeded or Failed" Mar 16 14:19:58.897: INFO: Pod "pod-secrets-732db7ab-89af-4736-ab24-8dba36aa6a3b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.864286ms Mar 16 14:20:00.901: INFO: Pod "pod-secrets-732db7ab-89af-4736-ab24-8dba36aa6a3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008079966s Mar 16 14:20:02.906: INFO: Pod "pod-secrets-732db7ab-89af-4736-ab24-8dba36aa6a3b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012579421s STEP: Saw pod success Mar 16 14:20:02.906: INFO: Pod "pod-secrets-732db7ab-89af-4736-ab24-8dba36aa6a3b" satisfied condition "Succeeded or Failed" Mar 16 14:20:02.909: INFO: Trying to get logs from node latest-worker pod pod-secrets-732db7ab-89af-4736-ab24-8dba36aa6a3b container secret-volume-test: STEP: delete the pod Mar 16 14:20:02.928: INFO: Waiting for pod pod-secrets-732db7ab-89af-4736-ab24-8dba36aa6a3b to disappear Mar 16 14:20:02.933: INFO: Pod pod-secrets-732db7ab-89af-4736-ab24-8dba36aa6a3b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:20:02.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4400" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4355,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:20:02.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 16 14:20:07.060: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:20:07.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2094" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4365,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:20:07.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3131 STEP: 
Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3131 STEP: creating replication controller externalsvc in namespace services-3131 I0316 14:20:07.276716 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3131, replica count: 2 I0316 14:20:10.327252 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0316 14:20:13.327550 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 16 14:20:13.353: INFO: Creating new exec pod Mar 16 14:20:17.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-3131 execpod59qbh -- /bin/sh -x -c nslookup clusterip-service' Mar 16 14:20:20.464: INFO: stderr: "I0316 14:20:20.366280 3211 log.go:172] (0xc00077a580) (0xc0006ff540) Create stream\nI0316 14:20:20.366324 3211 log.go:172] (0xc00077a580) (0xc0006ff540) Stream added, broadcasting: 1\nI0316 14:20:20.368548 3211 log.go:172] (0xc00077a580) Reply frame received for 1\nI0316 14:20:20.368602 3211 log.go:172] (0xc00077a580) (0xc0007de000) Create stream\nI0316 14:20:20.368620 3211 log.go:172] (0xc00077a580) (0xc0007de000) Stream added, broadcasting: 3\nI0316 14:20:20.369972 3211 log.go:172] (0xc00077a580) Reply frame received for 3\nI0316 14:20:20.370002 3211 log.go:172] (0xc00077a580) (0xc0006ff5e0) Create stream\nI0316 14:20:20.370012 3211 log.go:172] (0xc00077a580) (0xc0006ff5e0) Stream added, broadcasting: 5\nI0316 14:20:20.370912 3211 log.go:172] (0xc00077a580) Reply frame received for 5\nI0316 14:20:20.446114 3211 log.go:172] (0xc00077a580) Data frame received for 5\nI0316 14:20:20.446142 3211 
log.go:172] (0xc0006ff5e0) (5) Data frame handling\nI0316 14:20:20.446163 3211 log.go:172] (0xc0006ff5e0) (5) Data frame sent\n+ nslookup clusterip-service\nI0316 14:20:20.454415 3211 log.go:172] (0xc00077a580) Data frame received for 3\nI0316 14:20:20.454452 3211 log.go:172] (0xc0007de000) (3) Data frame handling\nI0316 14:20:20.454475 3211 log.go:172] (0xc0007de000) (3) Data frame sent\nI0316 14:20:20.455972 3211 log.go:172] (0xc00077a580) Data frame received for 3\nI0316 14:20:20.456009 3211 log.go:172] (0xc0007de000) (3) Data frame handling\nI0316 14:20:20.456138 3211 log.go:172] (0xc0007de000) (3) Data frame sent\nI0316 14:20:20.456334 3211 log.go:172] (0xc00077a580) Data frame received for 5\nI0316 14:20:20.456383 3211 log.go:172] (0xc0006ff5e0) (5) Data frame handling\nI0316 14:20:20.456414 3211 log.go:172] (0xc00077a580) Data frame received for 3\nI0316 14:20:20.456437 3211 log.go:172] (0xc0007de000) (3) Data frame handling\nI0316 14:20:20.458825 3211 log.go:172] (0xc00077a580) Data frame received for 1\nI0316 14:20:20.458865 3211 log.go:172] (0xc0006ff540) (1) Data frame handling\nI0316 14:20:20.458896 3211 log.go:172] (0xc0006ff540) (1) Data frame sent\nI0316 14:20:20.458929 3211 log.go:172] (0xc00077a580) (0xc0006ff540) Stream removed, broadcasting: 1\nI0316 14:20:20.458947 3211 log.go:172] (0xc00077a580) Go away received\nI0316 14:20:20.459430 3211 log.go:172] (0xc00077a580) (0xc0006ff540) Stream removed, broadcasting: 1\nI0316 14:20:20.459456 3211 log.go:172] (0xc00077a580) (0xc0007de000) Stream removed, broadcasting: 3\nI0316 14:20:20.459469 3211 log.go:172] (0xc00077a580) (0xc0006ff5e0) Stream removed, broadcasting: 5\n" Mar 16 14:20:20.464: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3131.svc.cluster.local\tcanonical name = externalsvc.services-3131.svc.cluster.local.\nName:\texternalsvc.services-3131.svc.cluster.local\nAddress: 10.96.30.64\n\n" STEP: deleting ReplicationController externalsvc in 
namespace services-3131, will wait for the garbage collector to delete the pods Mar 16 14:20:20.523: INFO: Deleting ReplicationController externalsvc took: 5.332748ms Mar 16 14:20:20.823: INFO: Terminating ReplicationController externalsvc pods took: 300.245342ms Mar 16 14:20:33.048: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:20:33.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3131" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:25.936 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":254,"skipped":4366,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:20:33.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-5531 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating statefulset ss in namespace statefulset-5531 Mar 16 14:20:33.167: INFO: Found 0 stateful pods, waiting for 1 Mar 16 14:20:43.174: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Mar 16 14:20:43.191: INFO: Deleting all statefulset in ns statefulset-5531 Mar 16 14:20:43.198: INFO: Scaling statefulset ss to 0 Mar 16 14:21:03.271: INFO: Waiting for statefulset status.replicas updated to 0 Mar 16 14:21:03.274: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:21:03.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5531" for this suite. 
• [SLOW TEST:30.222 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":255,"skipped":4367,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:21:03.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:21:19.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1547" for this suite. • [SLOW TEST:16.120 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":275,"completed":256,"skipped":4378,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:21:19.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-bb8a7a35-3567-45d2-a479-1bf60cb715bb STEP: Creating a pod to test consume secrets Mar 16 14:21:19.791: INFO: Waiting up to 5m0s for pod "pod-secrets-25bf70bd-6e1b-4295-a215-afc49ce8a503" in namespace "secrets-4297" to be "Succeeded or Failed" Mar 16 14:21:19.806: INFO: Pod "pod-secrets-25bf70bd-6e1b-4295-a215-afc49ce8a503": Phase="Pending", Reason="", readiness=false. Elapsed: 15.558819ms Mar 16 14:21:21.869: INFO: Pod "pod-secrets-25bf70bd-6e1b-4295-a215-afc49ce8a503": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078243883s Mar 16 14:21:23.873: INFO: Pod "pod-secrets-25bf70bd-6e1b-4295-a215-afc49ce8a503": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.081977241s STEP: Saw pod success Mar 16 14:21:23.873: INFO: Pod "pod-secrets-25bf70bd-6e1b-4295-a215-afc49ce8a503" satisfied condition "Succeeded or Failed" Mar 16 14:21:23.875: INFO: Trying to get logs from node latest-worker pod pod-secrets-25bf70bd-6e1b-4295-a215-afc49ce8a503 container secret-volume-test: STEP: delete the pod Mar 16 14:21:23.894: INFO: Waiting for pod pod-secrets-25bf70bd-6e1b-4295-a215-afc49ce8a503 to disappear Mar 16 14:21:23.974: INFO: Pod pod-secrets-25bf70bd-6e1b-4295-a215-afc49ce8a503 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:21:23.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4297" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4405,"failed":0} SS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:21:23.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 14:21:24.103: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: 
submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:21:28.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2971" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4407,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:21:28.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name secret-emptykey-test-58b47bd7-64dc-4347-80a2-af4c42716bef [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:21:28.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9321" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":259,"skipped":4441,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:21:28.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-6vjf STEP: Creating a pod to test atomic-volume-subpath Mar 16 14:21:28.413: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-6vjf" in namespace "subpath-9113" to be "Succeeded or Failed" Mar 16 14:21:28.416: INFO: Pod "pod-subpath-test-downwardapi-6vjf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.697345ms Mar 16 14:21:30.426: INFO: Pod "pod-subpath-test-downwardapi-6vjf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012907957s Mar 16 14:21:32.431: INFO: Pod "pod-subpath-test-downwardapi-6vjf": Phase="Running", Reason="", readiness=true. Elapsed: 4.017492472s Mar 16 14:21:34.435: INFO: Pod "pod-subpath-test-downwardapi-6vjf": Phase="Running", Reason="", readiness=true. Elapsed: 6.021311749s Mar 16 14:21:36.439: INFO: Pod "pod-subpath-test-downwardapi-6vjf": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.025579795s Mar 16 14:21:38.447: INFO: Pod "pod-subpath-test-downwardapi-6vjf": Phase="Running", Reason="", readiness=true. Elapsed: 10.034015459s Mar 16 14:21:40.452: INFO: Pod "pod-subpath-test-downwardapi-6vjf": Phase="Running", Reason="", readiness=true. Elapsed: 12.038143495s Mar 16 14:21:42.456: INFO: Pod "pod-subpath-test-downwardapi-6vjf": Phase="Running", Reason="", readiness=true. Elapsed: 14.042508904s Mar 16 14:21:44.460: INFO: Pod "pod-subpath-test-downwardapi-6vjf": Phase="Running", Reason="", readiness=true. Elapsed: 16.046432502s Mar 16 14:21:46.463: INFO: Pod "pod-subpath-test-downwardapi-6vjf": Phase="Running", Reason="", readiness=true. Elapsed: 18.049846179s Mar 16 14:21:48.467: INFO: Pod "pod-subpath-test-downwardapi-6vjf": Phase="Running", Reason="", readiness=true. Elapsed: 20.053174156s Mar 16 14:21:50.471: INFO: Pod "pod-subpath-test-downwardapi-6vjf": Phase="Running", Reason="", readiness=true. Elapsed: 22.057206332s Mar 16 14:21:52.475: INFO: Pod "pod-subpath-test-downwardapi-6vjf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.061705265s STEP: Saw pod success Mar 16 14:21:52.475: INFO: Pod "pod-subpath-test-downwardapi-6vjf" satisfied condition "Succeeded or Failed" Mar 16 14:21:52.477: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-6vjf container test-container-subpath-downwardapi-6vjf: STEP: delete the pod Mar 16 14:21:52.547: INFO: Waiting for pod pod-subpath-test-downwardapi-6vjf to disappear Mar 16 14:21:52.558: INFO: Pod pod-subpath-test-downwardapi-6vjf no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-6vjf Mar 16 14:21:52.558: INFO: Deleting pod "pod-subpath-test-downwardapi-6vjf" in namespace "subpath-9113" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:21:52.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9113" for this suite. • [SLOW TEST:24.333 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":260,"skipped":4447,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: 
Creating a kubernetes client Mar 16 14:21:52.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0316 14:22:02.676340 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 16 14:22:02.676: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:22:02.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5988" for this suite. 
• [SLOW TEST:10.116 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":261,"skipped":4478,"failed":0} SSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:22:02.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 16 14:22:06.812: INFO: &Pod{ObjectMeta:{send-events-edcf61fe-bbe8-4366-9c0d-8cd2e9f6f749 events-3139 /api/v1/namespaces/events-3139/pods/send-events-edcf61fe-bbe8-4366-9c0d-8cd2e9f6f749 8d6d4ae0-d76a-4781-b6b5-0dc081470125 293112 0 2020-03-16 14:22:02 +0000 UTC map[name:foo time:734568871] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l6kg7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l6kg7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l6kg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Conta
iner{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 14:22:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 14:22:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 14:22:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-16 14:22:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.143,StartTime:2020-03-16 14:22:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-16 14:22:05 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://551de5edf92eec4a35e1b912b2ee69237529b309c12af165a9433f765038927f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.143,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 16 14:22:08.816: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 16 14:22:10.820: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:22:10.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3139" for this suite. 
• [SLOW TEST:8.152 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":262,"skipped":4483,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:22:10.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 16 14:22:10.910: INFO: Waiting up to 5m0s for pod "downwardapi-volume-541d99fa-1905-4627-a505-5e478f4d1c48" in namespace "downward-api-6977" to be "Succeeded or Failed" Mar 16 14:22:10.929: INFO: Pod "downwardapi-volume-541d99fa-1905-4627-a505-5e478f4d1c48": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.73318ms Mar 16 14:22:12.932: INFO: Pod "downwardapi-volume-541d99fa-1905-4627-a505-5e478f4d1c48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022063786s Mar 16 14:22:14.937: INFO: Pod "downwardapi-volume-541d99fa-1905-4627-a505-5e478f4d1c48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026381386s STEP: Saw pod success Mar 16 14:22:14.937: INFO: Pod "downwardapi-volume-541d99fa-1905-4627-a505-5e478f4d1c48" satisfied condition "Succeeded or Failed" Mar 16 14:22:14.940: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-541d99fa-1905-4627-a505-5e478f4d1c48 container client-container: STEP: delete the pod Mar 16 14:22:15.001: INFO: Waiting for pod downwardapi-volume-541d99fa-1905-4627-a505-5e478f4d1c48 to disappear Mar 16 14:22:15.005: INFO: Pod downwardapi-volume-541d99fa-1905-4627-a505-5e478f4d1c48 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:22:15.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6977" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":263,"skipped":4514,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:22:15.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 16 14:22:15.077: INFO: Waiting up to 5m0s for pod "downwardapi-volume-513e193f-fa68-4e5b-bc62-ae0320c89537" in namespace "downward-api-9144" to be "Succeeded or Failed" Mar 16 14:22:15.087: INFO: Pod "downwardapi-volume-513e193f-fa68-4e5b-bc62-ae0320c89537": Phase="Pending", Reason="", readiness=false. Elapsed: 9.705308ms Mar 16 14:22:17.091: INFO: Pod "downwardapi-volume-513e193f-fa68-4e5b-bc62-ae0320c89537": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014177948s Mar 16 14:22:19.095: INFO: Pod "downwardapi-volume-513e193f-fa68-4e5b-bc62-ae0320c89537": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017853647s STEP: Saw pod success Mar 16 14:22:19.095: INFO: Pod "downwardapi-volume-513e193f-fa68-4e5b-bc62-ae0320c89537" satisfied condition "Succeeded or Failed" Mar 16 14:22:19.098: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-513e193f-fa68-4e5b-bc62-ae0320c89537 container client-container: STEP: delete the pod Mar 16 14:22:19.159: INFO: Waiting for pod downwardapi-volume-513e193f-fa68-4e5b-bc62-ae0320c89537 to disappear Mar 16 14:22:19.184: INFO: Pod downwardapi-volume-513e193f-fa68-4e5b-bc62-ae0320c89537 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:22:19.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9144" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4528,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:22:19.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: 
Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 14:22:19.870: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 14:22:21.881: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719965339, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719965339, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719965339, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719965339, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 14:22:24.934: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook 
STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:22:35.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2599" for this suite. STEP: Destroying namespace "webhook-2599-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.999 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":265,"skipped":4542,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:22:35.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 16 14:22:35.309: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9159 /api/v1/namespaces/watch-9159/configmaps/e2e-watch-test-configmap-a f0252f88-4828-4920-b58d-02e5c0b729de 293329 0 2020-03-16 14:22:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 16 14:22:35.309: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9159 /api/v1/namespaces/watch-9159/configmaps/e2e-watch-test-configmap-a f0252f88-4828-4920-b58d-02e5c0b729de 293329 0 2020-03-16 14:22:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 16 14:22:45.317: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9159 /api/v1/namespaces/watch-9159/configmaps/e2e-watch-test-configmap-a f0252f88-4828-4920-b58d-02e5c0b729de 293385 0 2020-03-16 14:22:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 16 14:22:45.317: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9159 /api/v1/namespaces/watch-9159/configmaps/e2e-watch-test-configmap-a f0252f88-4828-4920-b58d-02e5c0b729de 293385 0 2020-03-16 14:22:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying 
configmap A again and ensuring the correct watchers observe the notification Mar 16 14:22:55.326: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9159 /api/v1/namespaces/watch-9159/configmaps/e2e-watch-test-configmap-a f0252f88-4828-4920-b58d-02e5c0b729de 293417 0 2020-03-16 14:22:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 16 14:22:55.326: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9159 /api/v1/namespaces/watch-9159/configmaps/e2e-watch-test-configmap-a f0252f88-4828-4920-b58d-02e5c0b729de 293417 0 2020-03-16 14:22:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 16 14:23:05.334: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9159 /api/v1/namespaces/watch-9159/configmaps/e2e-watch-test-configmap-a f0252f88-4828-4920-b58d-02e5c0b729de 293447 0 2020-03-16 14:22:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Mar 16 14:23:05.334: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9159 /api/v1/namespaces/watch-9159/configmaps/e2e-watch-test-configmap-a f0252f88-4828-4920-b58d-02e5c0b729de 293447 0 2020-03-16 14:22:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 16 14:23:15.341: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9159 /api/v1/namespaces/watch-9159/configmaps/e2e-watch-test-configmap-b 
00064f38-cae6-43a5-91e6-6c38d6d267a8 293477 0 2020-03-16 14:23:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 16 14:23:15.341: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9159 /api/v1/namespaces/watch-9159/configmaps/e2e-watch-test-configmap-b 00064f38-cae6-43a5-91e6-6c38d6d267a8 293477 0 2020-03-16 14:23:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 16 14:23:25.350: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9159 /api/v1/namespaces/watch-9159/configmaps/e2e-watch-test-configmap-b 00064f38-cae6-43a5-91e6-6c38d6d267a8 293507 0 2020-03-16 14:23:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Mar 16 14:23:25.350: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9159 /api/v1/namespaces/watch-9159/configmaps/e2e-watch-test-configmap-b 00064f38-cae6-43a5-91e6-6c38d6d267a8 293507 0 2020-03-16 14:23:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:23:35.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9159" for this suite. 
• [SLOW TEST:60.171 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":266,"skipped":4557,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:23:35.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 16 14:23:35.949: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 16 14:23:37.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63719965415, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719965415, loc:(*time.Location)(0x7b1e080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63719965416, loc:(*time.Location)(0x7b1e080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63719965415, loc:(*time.Location)(0x7b1e080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 16 14:23:40.987: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Mar 16 14:23:40.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4003-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:23:42.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1071" for this suite. STEP: Destroying namespace "webhook-1071-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.818 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":267,"skipped":4558,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:23:42.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Mar 16 14:23:42.251: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8d323fd-dfc9-4b35-a185-dcaa06673468" in namespace "projected-5955" to be "Succeeded or Failed" Mar 16 14:23:42.254: INFO: Pod 
"downwardapi-volume-d8d323fd-dfc9-4b35-a185-dcaa06673468": Phase="Pending", Reason="", readiness=false. Elapsed: 3.648312ms Mar 16 14:23:44.259: INFO: Pod "downwardapi-volume-d8d323fd-dfc9-4b35-a185-dcaa06673468": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007933764s Mar 16 14:23:46.263: INFO: Pod "downwardapi-volume-d8d323fd-dfc9-4b35-a185-dcaa06673468": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012353093s STEP: Saw pod success Mar 16 14:23:46.263: INFO: Pod "downwardapi-volume-d8d323fd-dfc9-4b35-a185-dcaa06673468" satisfied condition "Succeeded or Failed" Mar 16 14:23:46.266: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d8d323fd-dfc9-4b35-a185-dcaa06673468 container client-container: STEP: delete the pod Mar 16 14:23:46.299: INFO: Waiting for pod downwardapi-volume-d8d323fd-dfc9-4b35-a185-dcaa06673468 to disappear Mar 16 14:23:46.347: INFO: Pod downwardapi-volume-d8d323fd-dfc9-4b35-a185-dcaa06673468 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:23:46.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5955" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4614,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Mar 16 14:23:46.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1520, will wait for the garbage collector to delete the pods Mar 16 14:23:52.488: INFO: Deleting Job.batch foo took: 5.384132ms Mar 16 14:23:52.588: INFO: Terminating Job.batch foo pods took: 100.217785ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Mar 16 14:24:32.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1520" for this suite. 
• [SLOW TEST:46.444 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":269,"skipped":4656,"failed":0}
[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:24:32.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating an pod
Mar 16 14:24:32.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-9495 -- logs-generator --log-lines-total 100 --run-duration 20s'
Mar 16 14:24:32.940: INFO: stderr: ""
Mar 16 14:24:32.940: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
Mar 16 14:24:32.940: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Mar 16 14:24:32.940: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-9495" to be "running and ready, or succeeded"
Mar 16 14:24:32.967: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 27.315523ms
Mar 16 14:24:34.972: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031834006s
Mar 16 14:24:36.976: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.035779499s
Mar 16 14:24:36.976: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Mar 16 14:24:36.976: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for a matching strings
Mar 16 14:24:36.976: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9495'
Mar 16 14:24:37.093: INFO: stderr: ""
Mar 16 14:24:37.093: INFO: stdout: "I0316 14:24:35.081828 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/2vrz 313\nI0316 14:24:35.282119 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/z4q 554\nI0316 14:24:35.482067 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/fvpq 559\nI0316 14:24:35.682073 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/nmj 313\nI0316 14:24:35.882016 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/jkp 371\nI0316 14:24:36.082006 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/5sl 378\nI0316 14:24:36.282084 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/w4m 476\nI0316 14:24:36.481998 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/mrm 467\nI0316 14:24:36.682039 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/68hw 380\nI0316 14:24:36.882022 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/n5h7 340\nI0316 14:24:37.082093 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/wgc 516\n"
STEP: limiting log lines
Mar 16 14:24:37.093: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9495 --tail=1'
Mar 16 14:24:37.195: INFO: stderr: ""
Mar 16 14:24:37.196: INFO: stdout: "I0316 14:24:37.082093 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/wgc 516\n"
Mar 16 14:24:37.196: INFO: got output "I0316 14:24:37.082093 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/wgc 516\n"
STEP: limiting log bytes
Mar 16 14:24:37.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9495 --limit-bytes=1'
Mar 16 14:24:37.296: INFO: stderr: ""
Mar 16 14:24:37.296: INFO: stdout: "I"
Mar 16 14:24:37.296: INFO: got output "I"
STEP: exposing timestamps
Mar 16 14:24:37.296: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9495 --tail=1 --timestamps'
Mar 16 14:24:37.401: INFO: stderr: ""
Mar 16 14:24:37.401: INFO: stdout: "2020-03-16T14:24:37.282234773Z I0316 14:24:37.281985 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/c4j7 262\n"
Mar 16 14:24:37.401: INFO: got output "2020-03-16T14:24:37.282234773Z I0316 14:24:37.281985 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/c4j7 262\n"
STEP: restricting to a time range
Mar 16 14:24:39.902: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9495 --since=1s'
Mar 16 14:24:40.011: INFO: stderr: ""
Mar 16 14:24:40.012: INFO: stdout: "I0316 14:24:39.081957 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/8gq 498\nI0316 14:24:39.281976 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/phn 370\nI0316 14:24:39.482030 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/4tj 535\nI0316 14:24:39.682010 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/8lsr 423\nI0316 14:24:39.882007 1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/q9w 446\n"
Mar 16 14:24:40.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-9495 --since=24h'
Mar 16 14:24:40.127: INFO: stderr: ""
Mar 16 14:24:40.127: INFO: stdout: "I0316 14:24:35.081828 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/2vrz 313\nI0316 14:24:35.282119 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/z4q 554\nI0316 14:24:35.482067 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/fvpq 559\nI0316 14:24:35.682073 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/nmj 313\nI0316 14:24:35.882016 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/jkp 371\nI0316 14:24:36.082006 1 logs_generator.go:76] 5 GET /api/v1/namespaces/default/pods/5sl 378\nI0316 14:24:36.282084 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/w4m 476\nI0316 14:24:36.481998 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/mrm 467\nI0316 14:24:36.682039 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/68hw 380\nI0316 14:24:36.882022 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/n5h7 340\nI0316 14:24:37.082093 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/wgc 516\nI0316 14:24:37.281985 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/c4j7 262\nI0316 14:24:37.482007 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/zv8 587\nI0316 14:24:37.682019 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/5kbh 488\nI0316 14:24:37.881993 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/m4r 475\nI0316 14:24:38.081999 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/5zk 367\nI0316 14:24:38.282004 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/8sl 429\nI0316 14:24:38.482007 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/bxc 268\nI0316 14:24:38.682029 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/gmz 429\nI0316 14:24:38.882055 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/mpc 409\nI0316 14:24:39.081957 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/8gq 498\nI0316 14:24:39.281976 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/phn 370\nI0316 14:24:39.482030 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/4tj 535\nI0316 14:24:39.682010 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/8lsr 423\nI0316 14:24:39.882007 1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/q9w 446\nI0316 14:24:40.081972 1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/l6mb 209\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
Mar 16 14:24:40.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-9495'
Mar 16 14:24:52.985: INFO: stderr: ""
Mar 16 14:24:52.985: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:24:52.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9495" for this suite.
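The four filtering modes exercised above map directly onto `kubectl logs` flags. A sketch of the same calls, stripped of the test harness (namespace and pod name taken from the log; any running pod works):

```shell
NS=kubectl-9495       # namespace from the log; substitute your own
POD=logs-generator

kubectl logs "$POD" -n "$NS"                        # full log stream
kubectl logs "$POD" -n "$NS" --tail=1               # last line only
kubectl logs "$POD" -n "$NS" --limit-bytes=1        # first byte only (prints "I")
kubectl logs "$POD" -n "$NS" --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
kubectl logs "$POD" -n "$NS" --since=1s             # only entries from the last second
kubectl logs "$POD" -n "$NS" --since=24h            # entries from the last 24 hours
```

Note that `--limit-bytes` truncates mid-line, as the log's `stdout: "I"` result shows, while `--tail` always returns whole lines.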
• [SLOW TEST:20.273 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":270,"skipped":4656,"failed":0}
[sig-network] Services should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:24:53.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:24:53.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9726" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":271,"skipped":4656,"failed":0}
SSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:24:53.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Mar 16 14:24:53.194: INFO: Waiting up to 5m0s for pod "downward-api-9838fcfd-33d7-40bc-99e3-f896e2398249" in namespace "downward-api-3638" to be "Succeeded or Failed"
Mar 16 14:24:53.197: INFO: Pod "downward-api-9838fcfd-33d7-40bc-99e3-f896e2398249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.895846ms
Mar 16 14:24:55.201: INFO: Pod "downward-api-9838fcfd-33d7-40bc-99e3-f896e2398249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007062674s
Mar 16 14:24:57.206: INFO: Pod "downward-api-9838fcfd-33d7-40bc-99e3-f896e2398249": Phase="Running", Reason="", readiness=true. Elapsed: 4.011578112s
Mar 16 14:24:59.210: INFO: Pod "downward-api-9838fcfd-33d7-40bc-99e3-f896e2398249": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016052898s
STEP: Saw pod success
Mar 16 14:24:59.211: INFO: Pod "downward-api-9838fcfd-33d7-40bc-99e3-f896e2398249" satisfied condition "Succeeded or Failed"
Mar 16 14:24:59.214: INFO: Trying to get logs from node latest-worker pod downward-api-9838fcfd-33d7-40bc-99e3-f896e2398249 container dapi-container:
STEP: delete the pod
Mar 16 14:24:59.243: INFO: Waiting for pod downward-api-9838fcfd-33d7-40bc-99e3-f896e2398249 to disappear
Mar 16 14:24:59.255: INFO: Pod downward-api-9838fcfd-33d7-40bc-99e3-f896e2398249 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:24:59.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3638" for this suite.
• [SLOW TEST:6.141 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":272,"skipped":4660,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:24:59.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Mar 16 14:25:09.365: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 16 14:25:09.427: INFO: Pod pod-with-poststart-http-hook still exists
Mar 16 14:25:11.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 16 14:25:11.431: INFO: Pod pod-with-poststart-http-hook still exists
Mar 16 14:25:13.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 16 14:25:13.431: INFO: Pod pod-with-poststart-http-hook still exists
Mar 16 14:25:15.427: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Mar 16 14:25:15.432: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:25:15.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6973" for this suite.
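The pod the test creates attaches a `postStart` HTTP hook, so the kubelet issues an HTTP GET to a handler pod as soon as the container starts. A minimal sketch in the same spirit (the image, path, port, and target IP are all illustrative assumptions, not values from the test):

```shell
# Hypothetical pod spec: the postStart httpGet fires once the container starts.
# 10.244.0.5 stands in for the hook-handler pod's IP created in BeforeEach.
kubectl apply -n container-lifecycle-hook-6973 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.2      # assumption; any long-running image works
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart  # assumed handler endpoint
          port: 8080
          host: 10.244.0.5           # assumed handler pod IP
EOF
```

If the hook's GET fails, the kubelet kills the container and restarts it according to the pod's restart policy, which is why the test only needs to observe the handler to confirm the hook ran.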
• [SLOW TEST:16.177 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4673,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:25:15.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:25:15.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7030" for this suite.
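The create/patch/delete-by-label sequence in the secrets test above can be reproduced from the command line. A rough sketch (secret name, label key, and data are illustrative; the namespace is taken from the log):

```shell
NS=secrets-7030   # namespace from the log

# creating a secret (contents are illustrative)
kubectl create secret generic test-secret -n "$NS" --from-literal=data=value

# patching the secret to add a label we can later select on
kubectl patch secret test-secret -n "$NS" \
  -p '{"metadata":{"labels":{"testsecret":"true"}}}'

# listing secrets in all namespaces, filtering by the patched label
kubectl get secrets --all-namespaces -l testsecret=true

# deleting the secret using a LabelSelector rather than by name
kubectl delete secret -n "$NS" -l testsecret=true
```

Deleting by label selector is what the test exercises: the final listing must come back empty, proving the selector matched exactly the patched secret.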
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":274,"skipped":4687,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Mar 16 14:25:15.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-1dd15eaa-3582-45a5-8da0-97951d1bb10b
STEP: Creating a pod to test consume secrets
Mar 16 14:25:15.698: INFO: Waiting up to 5m0s for pod "pod-secrets-f79c9d44-0a17-480f-bafe-66c2d77fdfba" in namespace "secrets-4862" to be "Succeeded or Failed"
Mar 16 14:25:15.702: INFO: Pod "pod-secrets-f79c9d44-0a17-480f-bafe-66c2d77fdfba": Phase="Pending", Reason="", readiness=false. Elapsed: 3.844982ms
Mar 16 14:25:17.706: INFO: Pod "pod-secrets-f79c9d44-0a17-480f-bafe-66c2d77fdfba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008246465s
Mar 16 14:25:19.710: INFO: Pod "pod-secrets-f79c9d44-0a17-480f-bafe-66c2d77fdfba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012551308s
STEP: Saw pod success
Mar 16 14:25:19.710: INFO: Pod "pod-secrets-f79c9d44-0a17-480f-bafe-66c2d77fdfba" satisfied condition "Succeeded or Failed"
Mar 16 14:25:19.713: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-f79c9d44-0a17-480f-bafe-66c2d77fdfba container secret-volume-test:
STEP: delete the pod
Mar 16 14:25:19.726: INFO: Waiting for pod pod-secrets-f79c9d44-0a17-480f-bafe-66c2d77fdfba to disappear
Mar 16 14:25:19.750: INFO: Pod pod-secrets-f79c9d44-0a17-480f-bafe-66c2d77fdfba no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Mar 16 14:25:19.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4862" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":275,"skipped":4692,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
Mar 16 14:25:19.760: INFO: Running AfterSuite actions on all nodes
Mar 16 14:25:19.760: INFO: Running AfterSuite actions on node 1
Mar 16 14:25:19.760: INFO: Skipping dumping logs from cluster
JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 4910.843 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS