I0212 20:26:53.683768 9 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0212 20:26:53.684652 9 e2e.go:110] Starting e2e run "94c6791d-b4b4-49b4-91fb-bb25701d34ed" on Ginkgo node 1
{"msg":"Test Suite starting","total":277,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581539212 - Will randomize all specs
Will run 277 of 4841 specs

Feb 12 20:26:53.784: INFO: >>> kubeConfig: /root/.kube/config
Feb 12 20:26:53.792: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 12 20:26:53.831: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 12 20:26:53.889: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 12 20:26:53.889: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 12 20:26:53.890: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 12 20:26:53.907: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 12 20:26:53.907: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 12 20:26:53.907: INFO: e2e test version: v1.18.0-alpha.4.4+6541758fd4d9fc
Feb 12 20:26:53.909: INFO: kube-apiserver version: v1.17.0
Feb 12 20:26:53.910: INFO: >>> kubeConfig: /root/.kube/config
Feb 12 20:26:53.968: INFO: Cluster IP family: ipv4
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 20:26:53.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
Feb 12 20:26:54.294: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 20:26:54.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb 12 20:26:57.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3558 create -f -'
Feb 12 20:27:00.361: INFO: stderr: ""
Feb 12 20:27:00.361: INFO: stdout: "e2e-test-crd-publish-openapi-5934-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Feb 12 20:27:00.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3558 delete e2e-test-crd-publish-openapi-5934-crds test-cr'
Feb 12 20:27:00.546: INFO: stderr: ""
Feb 12 20:27:00.546: INFO: stdout: "e2e-test-crd-publish-openapi-5934-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Feb 12 20:27:00.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3558 apply -f -'
Feb 12 20:27:00.896: INFO: stderr: ""
Feb 12 20:27:00.896: INFO: stdout: "e2e-test-crd-publish-openapi-5934-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Feb 12 20:27:00.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3558 delete e2e-test-crd-publish-openapi-5934-crds test-cr'
Feb 12 20:27:01.126: INFO: stderr: ""
Feb 12 20:27:01.127: INFO: stdout: "e2e-test-crd-publish-openapi-5934-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Feb 12 20:27:01.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5934-crds'
Feb 12 20:27:01.548: INFO: stderr: ""
Feb 12 20:27:01.548: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5934-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 20:27:05.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3558" for this suite.
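
[Editor's note] The spec above exercises CRD schema publishing: a CRD that preserves unknown fields inside an embedded object is created, client-side validation (kubectl create/apply) accepts arbitrary properties, and kubectl explain renders the published OpenAPI ("Specification of Waldo" / "Status of Waldo" above). A minimal sketch of such a CRD follows; the group example.com and the Waldo names are illustrative stand-ins, not the manifest the test framework actually generates:

    # Hedged sketch: a CRD whose spec/status tolerate unknown fields
    cat <<'EOF' | kubectl create -f -
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: waldos.example.com
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: waldos
        singular: waldo
        kind: Waldo
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            properties:
              spec:
                description: Specification of Waldo
                type: object
                x-kubernetes-preserve-unknown-fields: true
              status:
                description: Status of Waldo
                type: object
                x-kubernetes-preserve-unknown-fields: true
    EOF
    # Unknown properties under spec/status now pass client-side validation,
    # and the published schema is visible via:
    kubectl explain waldos.spec
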
• [SLOW TEST:11.307 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":277,"completed":1,"skipped":12,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 20:27:05.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2040
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-2040
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2040
Feb 12 20:27:05.408: INFO: Found 0 stateful pods, waiting for 1
Feb 12 20:27:15.431: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 12 20:27:15.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2040 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 12 20:27:15.825: INFO: stderr: "I0212 20:27:15.598400 149 log.go:172] (0xc000a3d6b0) (0xc0009da820) Create stream\nI0212 20:27:15.598579 149 log.go:172] (0xc000a3d6b0) (0xc0009da820) Stream added, broadcasting: 1\nI0212 20:27:15.612712 149 log.go:172] (0xc000a3d6b0) Reply frame received for 1\nI0212 20:27:15.612775 149 log.go:172] (0xc000a3d6b0) (0xc000b38000) Create stream\nI0212 20:27:15.612795 149 log.go:172] (0xc000a3d6b0) (0xc000b38000) Stream added, broadcasting: 3\nI0212 20:27:15.615102 149 log.go:172] (0xc000a3d6b0) Reply frame received for 3\nI0212 20:27:15.615212 149 log.go:172] (0xc000a3d6b0) (0xc0009da000) Create stream\nI0212 20:27:15.615235 149 log.go:172] (0xc000a3d6b0) (0xc0009da000) Stream added, broadcasting: 5\nI0212 20:27:15.618305 149 log.go:172] (0xc000a3d6b0) Reply frame received for 5\nI0212 20:27:15.701570 149 log.go:172] (0xc000a3d6b0) Data frame received for 5\nI0212 20:27:15.701655 149 log.go:172] (0xc0009da000) (5) Data frame handling\nI0212 
20:27:15.701687 149 log.go:172] (0xc0009da000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0212 20:27:15.745331 149 log.go:172] (0xc000a3d6b0) Data frame received for 3\nI0212 20:27:15.745385 149 log.go:172] (0xc000b38000) (3) Data frame handling\nI0212 20:27:15.745413 149 log.go:172] (0xc000b38000) (3) Data frame sent\nI0212 20:27:15.812121 149 log.go:172] (0xc000a3d6b0) Data frame received for 1\nI0212 20:27:15.812332 149 log.go:172] (0xc000a3d6b0) (0xc000b38000) Stream removed, broadcasting: 3\nI0212 20:27:15.812466 149 log.go:172] (0xc000a3d6b0) (0xc0009da000) Stream removed, broadcasting: 5\nI0212 20:27:15.812526 149 log.go:172] (0xc0009da820) (1) Data frame handling\nI0212 20:27:15.812577 149 log.go:172] (0xc0009da820) (1) Data frame sent\nI0212 20:27:15.812607 149 log.go:172] (0xc000a3d6b0) (0xc0009da820) Stream removed, broadcasting: 1\nI0212 20:27:15.812643 149 log.go:172] (0xc000a3d6b0) Go away received\nI0212 20:27:15.813679 149 log.go:172] (0xc000a3d6b0) (0xc0009da820) Stream removed, broadcasting: 1\nI0212 20:27:15.813695 149 log.go:172] (0xc000a3d6b0) (0xc000b38000) Stream removed, broadcasting: 3\nI0212 20:27:15.813706 149 log.go:172] (0xc000a3d6b0) (0xc0009da000) Stream removed, broadcasting: 5\n" Feb 12 20:27:15.825: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 12 20:27:15.825: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 12 20:27:15.831: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 12 20:27:25.839: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 12 20:27:25.839: INFO: Waiting for statefulset status.replicas updated to 0 Feb 12 20:27:25.890: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999251s Feb 12 20:27:26.897: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.976637296s Feb 12 20:27:27.902: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.96970918s Feb 12 20:27:28.911: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.964280295s Feb 12 20:27:29.918: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.955218172s Feb 12 20:27:30.924: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.948063422s Feb 12 20:27:31.929: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.942774223s Feb 12 20:27:32.934: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.937499115s Feb 12 20:27:33.943: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.931878556s Feb 12 20:27:35.276: INFO: Verifying statefulset ss doesn't scale past 1 for another 923.277766ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2040 Feb 12 20:27:36.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2040 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 12 20:27:36.863: INFO: stderr: "I0212 20:27:36.635347 173 log.go:172] (0xc000a80dc0) (0xc00063ff40) Create stream\nI0212 20:27:36.636430 173 log.go:172] (0xc000a80dc0) (0xc00063ff40) Stream added, broadcasting: 1\nI0212 20:27:36.651598 173 log.go:172] (0xc000a80dc0) Reply frame received for 1\nI0212 20:27:36.651708 173 log.go:172] (0xc000a80dc0) (0xc000a780a0) Create stream\nI0212 20:27:36.651729 173 log.go:172] 
(0xc000a80dc0) (0xc000a780a0) Stream added, broadcasting: 3\nI0212 20:27:36.653073 173 log.go:172] (0xc000a80dc0) Reply frame received for 3\nI0212 20:27:36.653098 173 log.go:172] (0xc000a80dc0) (0xc000a4c0a0) Create stream\nI0212 20:27:36.653112 173 log.go:172] (0xc000a80dc0) (0xc000a4c0a0) Stream added, broadcasting: 5\nI0212 20:27:36.654845 173 log.go:172] (0xc000a80dc0) Reply frame received for 5\nI0212 20:27:36.772989 173 log.go:172] (0xc000a80dc0) Data frame received for 5\nI0212 20:27:36.773020 173 log.go:172] (0xc000a4c0a0) (5) Data frame handling\nI0212 20:27:36.773038 173 log.go:172] (0xc000a4c0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0212 20:27:36.773281 173 log.go:172] (0xc000a80dc0) Data frame received for 3\nI0212 20:27:36.773299 173 log.go:172] (0xc000a780a0) (3) Data frame handling\nI0212 20:27:36.773313 173 log.go:172] (0xc000a780a0) (3) Data frame sent\nI0212 20:27:36.845816 173 log.go:172] (0xc000a80dc0) Data frame received for 1\nI0212 20:27:36.846101 173 log.go:172] (0xc000a80dc0) (0xc000a780a0) Stream removed, broadcasting: 3\nI0212 20:27:36.846227 173 log.go:172] (0xc00063ff40) (1) Data frame handling\nI0212 20:27:36.846269 173 log.go:172] (0xc00063ff40) (1) Data frame sent\nI0212 20:27:36.846320 173 log.go:172] (0xc000a80dc0) (0xc00063ff40) Stream removed, broadcasting: 1\nI0212 20:27:36.846402 173 log.go:172] (0xc000a80dc0) (0xc000a4c0a0) Stream removed, broadcasting: 5\nI0212 20:27:36.846572 173 log.go:172] (0xc000a80dc0) Go away received\nI0212 20:27:36.847709 173 log.go:172] (0xc000a80dc0) (0xc00063ff40) Stream removed, broadcasting: 1\nI0212 20:27:36.847728 173 log.go:172] (0xc000a80dc0) (0xc000a780a0) Stream removed, broadcasting: 3\nI0212 20:27:36.847748 173 log.go:172] (0xc000a80dc0) (0xc000a4c0a0) Stream removed, broadcasting: 5\n" Feb 12 20:27:36.864: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 12 20:27:36.864: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 12 20:27:36.870: INFO: Found 1 stateful pods, waiting for 3 Feb 12 20:27:46.876: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 12 20:27:46.876: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 12 20:27:46.876: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 12 20:27:56.881: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 12 20:27:56.881: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 12 20:27:56.881: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 12 20:27:56.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2040 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 12 20:27:57.226: INFO: stderr: "I0212 20:27:57.065716 192 log.go:172] (0xc000a380b0) (0xc000a10140) Create stream\nI0212 20:27:57.065928 192 log.go:172] (0xc000a380b0) (0xc000a10140) Stream added, broadcasting: 1\nI0212 20:27:57.070034 192 log.go:172] (0xc000a380b0) Reply frame received for 1\nI0212 20:27:57.070084 192 log.go:172] (0xc000a380b0) (0xc0007840a0) Create stream\nI0212 20:27:57.070105 192 
log.go:172] (0xc000a380b0) (0xc0007840a0) Stream added, broadcasting: 3\nI0212 20:27:57.071587 192 log.go:172] (0xc000a380b0) Reply frame received for 3\nI0212 20:27:57.071605 192 log.go:172] (0xc000a380b0) (0xc000a101e0) Create stream\nI0212 20:27:57.071612 192 log.go:172] (0xc000a380b0) (0xc000a101e0) Stream added, broadcasting: 5\nI0212 20:27:57.073329 192 log.go:172] (0xc000a380b0) Reply frame received for 5\nI0212 20:27:57.138012 192 log.go:172] (0xc000a380b0) Data frame received for 3\nI0212 20:27:57.138115 192 log.go:172] (0xc0007840a0) (3) Data frame handling\nI0212 20:27:57.138155 192 log.go:172] (0xc0007840a0) (3) Data frame sent\nI0212 20:27:57.141575 192 log.go:172] (0xc000a380b0) Data frame received for 5\nI0212 20:27:57.141919 192 log.go:172] (0xc000a101e0) (5) Data frame handling\nI0212 20:27:57.142013 192 log.go:172] (0xc000a101e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0212 20:27:57.218768 192 log.go:172] (0xc000a380b0) (0xc0007840a0) Stream removed, broadcasting: 3\nI0212 20:27:57.218905 192 log.go:172] (0xc000a380b0) Data frame received for 1\nI0212 20:27:57.218933 192 log.go:172] (0xc000a10140) (1) Data frame handling\nI0212 20:27:57.218960 192 log.go:172] (0xc000a10140) (1) Data frame sent\nI0212 20:27:57.218980 192 log.go:172] (0xc000a380b0) (0xc000a10140) Stream removed, broadcasting: 1\nI0212 20:27:57.219217 192 log.go:172] (0xc000a380b0) (0xc000a101e0) Stream removed, broadcasting: 5\nI0212 20:27:57.219319 192 log.go:172] (0xc000a380b0) Go away received\nI0212 20:27:57.219689 192 log.go:172] (0xc000a380b0) (0xc000a10140) Stream removed, broadcasting: 1\nI0212 20:27:57.219714 192 log.go:172] (0xc000a380b0) (0xc0007840a0) Stream removed, broadcasting: 3\nI0212 20:27:57.219726 192 log.go:172] (0xc000a380b0) (0xc000a101e0) Stream removed, broadcasting: 5\n" Feb 12 20:27:57.226: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 12 20:27:57.226: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 12 20:27:57.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2040 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 12 20:27:57.536: INFO: stderr: "I0212 20:27:57.370296 211 log.go:172] (0xc000984000) (0xc0004df4a0) Create stream\nI0212 20:27:57.370384 211 log.go:172] (0xc000984000) (0xc0004df4a0) Stream added, broadcasting: 1\nI0212 20:27:57.373940 211 log.go:172] (0xc000984000) Reply frame received for 1\nI0212 20:27:57.373966 211 log.go:172] (0xc000984000) (0xc00092a000) Create stream\nI0212 20:27:57.373977 211 log.go:172] (0xc000984000) (0xc00092a000) Stream added, broadcasting: 3\nI0212 20:27:57.375192 211 log.go:172] (0xc000984000) Reply frame received for 3\nI0212 20:27:57.375213 211 log.go:172] (0xc000984000) (0xc0009f6000) Create stream\nI0212 20:27:57.375219 211 log.go:172] (0xc000984000) (0xc0009f6000) Stream added, broadcasting: 5\nI0212 20:27:57.376290 211 log.go:172] (0xc000984000) Reply frame received for 5\nI0212 20:27:57.428782 211 log.go:172] (0xc000984000) Data frame received for 5\nI0212 20:27:57.428806 211 log.go:172] (0xc0009f6000) (5) Data frame handling\nI0212 20:27:57.428826 211 log.go:172] (0xc0009f6000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0212 20:27:57.453162 211 log.go:172] (0xc000984000) Data frame received for 3\nI0212 20:27:57.453191 211 log.go:172] (0xc00092a000) 
(3) Data frame handling\nI0212 20:27:57.453204 211 log.go:172] (0xc00092a000) (3) Data frame sent\nI0212 20:27:57.529013 211 log.go:172] (0xc000984000) Data frame received for 1\nI0212 20:27:57.529060 211 log.go:172] (0xc000984000) (0xc0009f6000) Stream removed, broadcasting: 5\nI0212 20:27:57.529107 211 log.go:172] (0xc0004df4a0) (1) Data frame handling\nI0212 20:27:57.529122 211 log.go:172] (0xc0004df4a0) (1) Data frame sent\nI0212 20:27:57.529140 211 log.go:172] (0xc000984000) (0xc00092a000) Stream removed, broadcasting: 3\nI0212 20:27:57.529161 211 log.go:172] (0xc000984000) (0xc0004df4a0) Stream removed, broadcasting: 1\nI0212 20:27:57.529298 211 log.go:172] (0xc000984000) Go away received\nI0212 20:27:57.529558 211 log.go:172] (0xc000984000) (0xc0004df4a0) Stream removed, broadcasting: 1\nI0212 20:27:57.529604 211 log.go:172] (0xc000984000) (0xc00092a000) Stream removed, broadcasting: 3\nI0212 20:27:57.529631 211 log.go:172] (0xc000984000) (0xc0009f6000) Stream removed, broadcasting: 5\n" Feb 12 20:27:57.537: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 12 20:27:57.537: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 12 20:27:57.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2040 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 12 20:27:58.068: INFO: stderr: "I0212 20:27:57.797685 233 log.go:172] (0xc000a88dc0) (0xc000c083c0) Create stream\nI0212 20:27:57.798354 233 log.go:172] (0xc000a88dc0) (0xc000c083c0) Stream added, broadcasting: 1\nI0212 20:27:57.806742 233 log.go:172] (0xc000a88dc0) Reply frame received for 1\nI0212 20:27:57.806830 233 log.go:172] (0xc000a88dc0) (0xc000a58000) Create stream\nI0212 20:27:57.806844 233 log.go:172] (0xc000a88dc0) (0xc000a58000) Stream added, broadcasting: 3\nI0212 20:27:57.808737 233 log.go:172] (0xc000a88dc0) Reply frame received for 3\nI0212 20:27:57.808762 233 log.go:172] (0xc000a88dc0) (0xc000c08460) Create stream\nI0212 20:27:57.808775 233 log.go:172] (0xc000a88dc0) (0xc000c08460) Stream added, broadcasting: 5\nI0212 20:27:57.811498 233 log.go:172] (0xc000a88dc0) Reply frame received for 5\nI0212 20:27:57.915469 233 log.go:172] (0xc000a88dc0) Data frame received for 5\nI0212 20:27:57.915497 233 log.go:172] (0xc000c08460) (5) Data frame handling\nI0212 20:27:57.915514 233 log.go:172] (0xc000c08460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0212 20:27:57.952363 233 log.go:172] (0xc000a88dc0) Data frame received for 3\nI0212 20:27:57.952402 233 log.go:172] (0xc000a58000) (3) Data frame handling\nI0212 20:27:57.952423 233 log.go:172] (0xc000a58000) (3) Data frame sent\nI0212 20:27:58.052596 233 log.go:172] (0xc000a88dc0) Data frame received for 1\nI0212 20:27:58.052711 233 log.go:172] (0xc000c083c0) (1) Data frame handling\nI0212 20:27:58.052750 233 log.go:172] (0xc000c083c0) (1) Data frame sent\nI0212 20:27:58.055004 233 log.go:172] (0xc000a88dc0) (0xc000c083c0) Stream removed, broadcasting: 1\nI0212 20:27:58.055536 233 log.go:172] (0xc000a88dc0) (0xc000a58000) Stream removed, broadcasting: 3\nI0212 20:27:58.056289 233 log.go:172] (0xc000a88dc0) (0xc000c08460) Stream removed, broadcasting: 5\nI0212 20:27:58.056366 233 log.go:172] (0xc000a88dc0) (0xc000c083c0) Stream removed, broadcasting: 1\nI0212 20:27:58.056380 233 log.go:172] (0xc000a88dc0) (0xc000a58000) Stream removed, broadcasting: 
3\nI0212 20:27:58.056389 233 log.go:172] (0xc000a88dc0) (0xc000c08460) Stream removed, broadcasting: 5\n" Feb 12 20:27:58.069: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 12 20:27:58.069: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 12 20:27:58.069: INFO: Waiting for statefulset status.replicas updated to 0 Feb 12 20:27:58.076: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Feb 12 20:28:08.088: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 12 20:28:08.088: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 12 20:28:08.088: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 12 20:28:08.114: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999592s Feb 12 20:28:09.120: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984376854s Feb 12 20:28:10.131: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.97765834s Feb 12 20:28:11.140: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.967007835s Feb 12 20:28:12.150: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.957466329s Feb 12 20:28:13.158: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.94842025s Feb 12 20:28:14.437: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.939708156s Feb 12 20:28:15.445: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.660543086s Feb 12 20:28:16.455: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.652482175s Feb 12 20:28:17.463: INFO: Verifying statefulset ss doesn't scale past 3 for another 643.046999ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-2040 Feb 12 20:28:18.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2040 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 12 20:28:18.865: INFO: stderr: "I0212 20:28:18.676642 253 log.go:172] (0xc000a8b550) (0xc0008ec6e0) Create stream\nI0212 20:28:18.676772 253 log.go:172] (0xc000a8b550) (0xc0008ec6e0) Stream added, broadcasting: 1\nI0212 20:28:18.683586 253 log.go:172] (0xc000a8b550) Reply frame received for 1\nI0212 20:28:18.683624 253 log.go:172] (0xc000a8b550) (0xc00057a640) Create stream\nI0212 20:28:18.683632 253 log.go:172] (0xc000a8b550) (0xc00057a640) Stream added, broadcasting: 3\nI0212 20:28:18.684930 253 log.go:172] (0xc000a8b550) Reply frame received for 3\nI0212 20:28:18.684954 253 log.go:172] (0xc000a8b550) (0xc0007574a0) Create stream\nI0212 20:28:18.684965 253 log.go:172] (0xc000a8b550) (0xc0007574a0) Stream added, broadcasting: 5\nI0212 20:28:18.686162 253 log.go:172] (0xc000a8b550) Reply frame received for 5\nI0212 20:28:18.778241 253 log.go:172] (0xc000a8b550) Data frame received for 5\nI0212 20:28:18.778287 253 log.go:172] (0xc0007574a0) (5) Data frame handling\nI0212 20:28:18.778319 253 log.go:172] (0xc0007574a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0212 20:28:18.778414 253 log.go:172] (0xc000a8b550) Data frame received for 3\nI0212 20:28:18.778440 253 log.go:172] (0xc00057a640) (3) Data frame handling\nI0212 20:28:18.778466 253 log.go:172] (0xc00057a640) (3) Data frame sent\nI0212 20:28:18.855037 253 
log.go:172] (0xc000a8b550) (0xc00057a640) Stream removed, broadcasting: 3\nI0212 20:28:18.855262 253 log.go:172] (0xc000a8b550) Data frame received for 1\nI0212 20:28:18.855280 253 log.go:172] (0xc0008ec6e0) (1) Data frame handling\nI0212 20:28:18.855300 253 log.go:172] (0xc0008ec6e0) (1) Data frame sent\nI0212 20:28:18.855320 253 log.go:172] (0xc000a8b550) (0xc0008ec6e0) Stream removed, broadcasting: 1\nI0212 20:28:18.855715 253 log.go:172] (0xc000a8b550) (0xc0007574a0) Stream removed, broadcasting: 5\nI0212 20:28:18.855748 253 log.go:172] (0xc000a8b550) (0xc0008ec6e0) Stream removed, broadcasting: 1\nI0212 20:28:18.855756 253 log.go:172] (0xc000a8b550) (0xc00057a640) Stream removed, broadcasting: 3\nI0212 20:28:18.855761 253 log.go:172] (0xc000a8b550) (0xc0007574a0) Stream removed, broadcasting: 5\n" Feb 12 20:28:18.865: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 12 20:28:18.865: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 12 20:28:18.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2040 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 12 20:28:19.251: INFO: stderr: "I0212 20:28:19.086626 273 log.go:172] (0xc000abae70) (0xc000ab20a0) Create stream\nI0212 20:28:19.087320 273 log.go:172] (0xc000abae70) (0xc000ab20a0) Stream added, broadcasting: 1\nI0212 20:28:19.092492 273 log.go:172] (0xc000abae70) Reply frame received for 1\nI0212 20:28:19.092723 273 log.go:172] (0xc000abae70) (0xc0008e8000) Create stream\nI0212 20:28:19.092747 273 log.go:172] (0xc000abae70) (0xc0008e8000) Stream added, broadcasting: 3\nI0212 20:28:19.094845 273 log.go:172] (0xc000abae70) Reply frame received for 3\nI0212 20:28:19.094920 273 log.go:172] (0xc000abae70) (0xc000732000) Create stream\nI0212 20:28:19.094940 273 log.go:172] (0xc000abae70) (0xc000732000) Stream added, broadcasting: 5\nI0212 20:28:19.095763 273 log.go:172] (0xc000abae70) Reply frame received for 5\nI0212 20:28:19.166030 273 log.go:172] (0xc000abae70) Data frame received for 5\nI0212 20:28:19.166151 273 log.go:172] (0xc000732000) (5) Data frame handling\nI0212 20:28:19.166180 273 log.go:172] (0xc000732000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0212 20:28:19.166210 273 log.go:172] (0xc000abae70) Data frame received for 3\nI0212 20:28:19.166223 273 log.go:172] (0xc0008e8000) (3) Data frame handling\nI0212 20:28:19.166238 273 log.go:172] (0xc0008e8000) (3) Data frame sent\nI0212 20:28:19.242173 273 log.go:172] (0xc000abae70) Data frame received for 1\nI0212 20:28:19.242206 273 log.go:172] (0xc000abae70) (0xc0008e8000) Stream removed, broadcasting: 3\nI0212 20:28:19.242264 273 log.go:172] (0xc000ab20a0) (1) Data frame handling\nI0212 20:28:19.242306 273 log.go:172] (0xc000ab20a0) (1) Data frame sent\nI0212 20:28:19.242318 273 log.go:172] (0xc000abae70) (0xc000ab20a0) Stream removed, broadcasting: 1\nI0212 20:28:19.242679 273 log.go:172] (0xc000abae70) (0xc000732000) Stream removed, broadcasting: 5\nI0212 20:28:19.242740 273 log.go:172] (0xc000abae70) (0xc000ab20a0) Stream removed, broadcasting: 1\nI0212 20:28:19.242781 273 log.go:172] (0xc000abae70) (0xc0008e8000) Stream removed, broadcasting: 3\nI0212 20:28:19.242808 273 log.go:172] (0xc000abae70) (0xc000732000) Stream removed, broadcasting: 5\n" Feb 12 20:28:19.252: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" Feb 12 20:28:19.252: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 12 20:28:19.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2040 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 12 20:28:19.606: INFO: stderr: "I0212 20:28:19.417251 291 log.go:172] (0xc0003c71e0) (0xc0009700a0) Create stream\nI0212 20:28:19.417327 291 log.go:172] (0xc0003c71e0) (0xc0009700a0) Stream added, broadcasting: 1\nI0212 20:28:19.419652 291 log.go:172] (0xc0003c71e0) Reply frame received for 1\nI0212 20:28:19.419678 291 log.go:172] (0xc0003c71e0) (0xc0005b8820) Create stream\nI0212 20:28:19.419687 291 log.go:172] (0xc0003c71e0) (0xc0005b8820) Stream added, broadcasting: 3\nI0212 20:28:19.420663 291 log.go:172] (0xc0003c71e0) Reply frame received for 3\nI0212 20:28:19.420701 291 log.go:172] (0xc0003c71e0) (0xc000657c20) Create stream\nI0212 20:28:19.420732 291 log.go:172] (0xc0003c71e0) (0xc000657c20) Stream added, broadcasting: 5\nI0212 20:28:19.421963 291 log.go:172] (0xc0003c71e0) Reply frame received for 5\nI0212 20:28:19.531397 291 log.go:172] (0xc0003c71e0) Data frame received for 5\nI0212 20:28:19.531674 291 log.go:172] (0xc000657c20) (5) Data frame handling\nI0212 20:28:19.531728 291 log.go:172] (0xc000657c20) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0212 20:28:19.531905 291 log.go:172] (0xc0003c71e0) Data frame received for 3\nI0212 20:28:19.531939 291 log.go:172] (0xc0005b8820) (3) Data frame handling\nI0212 20:28:19.531969 291 log.go:172] (0xc0005b8820) (3) Data frame sent\nI0212 20:28:19.597440 291 log.go:172] (0xc0003c71e0) Data frame received for 1\nI0212 20:28:19.597533 291 log.go:172] (0xc0003c71e0) (0xc0005b8820) Stream removed, broadcasting: 3\nI0212 20:28:19.597601 291 log.go:172] (0xc0009700a0) (1) Data frame handling\nI0212 20:28:19.597621 291 log.go:172] (0xc0009700a0) (1) Data frame sent\nI0212 20:28:19.597653 291 log.go:172] (0xc0003c71e0) (0xc000657c20) Stream removed, broadcasting: 5\nI0212 20:28:19.597682 291 log.go:172] (0xc0003c71e0) (0xc0009700a0) Stream removed, broadcasting: 1\nI0212 20:28:19.597694 291 log.go:172] (0xc0003c71e0) Go away received\nI0212 20:28:19.598693 291 log.go:172] (0xc0003c71e0) (0xc0009700a0) Stream removed, broadcasting: 1\nI0212 20:28:19.598712 291 log.go:172] (0xc0003c71e0) (0xc0005b8820) Stream removed, broadcasting: 3\nI0212 20:28:19.598720 291 log.go:172] (0xc0003c71e0) (0xc000657c20) Stream removed, broadcasting: 5\n" Feb 12 20:28:19.606: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 12 20:28:19.606: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 12 20:28:19.606: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Feb 12 20:28:39.633: INFO: Deleting all statefulset in ns statefulset-2040 Feb 12 20:28:39.639: INFO: Scaling statefulset ss to 0 Feb 12 20:28:39.655: INFO: Waiting for statefulset status.replicas updated to 0 Feb 12 20:28:39.660: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet 
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 20:28:39.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2040" for this suite.

• [SLOW TEST:94.418 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":277,"completed":2,"skipped":27,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 20:28:39.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 20:29:39.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7379" for this suite.
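
[Editor's note] The probe spec above depends on an asymmetry worth spelling out: a failing readiness probe only removes the pod from service endpoints (Ready stays False); it never restarts the container, because restarts are driven by liveness probes alone. A minimal sketch of a pod that stays Running but never becomes Ready; the name and image here are illustrative assumptions, not the exact pod the framework creates:

    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: never-ready              # hypothetical name
    spec:
      containers:
      - name: busybox
        image: busybox:1.29
        args: ["/bin/sh", "-c", "sleep 600"]
        readinessProbe:
          exec:
            command: ["/bin/false"]  # always fails, so Ready stays False
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF
    # Expect READY 0/1 and RESTARTS 0 for the pod's whole lifetime:
    kubectl get pod never-ready -w
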
• [SLOW TEST:60.138 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":277,"completed":3,"skipped":42,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 20:29:39.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-a5363fe0-9a22-4cee-8b35-5d9d0e31a62f
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-a5363fe0-9a22-4cee-8b35-5d9d0e31a62f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 20:29:48.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7564" for this suite.
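
[Editor's note] The ConfigMap spec above ("Updating configmap ... waiting to observe update in volume") relies on the kubelet periodically syncing ConfigMap-backed volumes, so an edit to the ConfigMap eventually shows up in the mounted files (this does not hold for subPath mounts or environment variables). A hedged sketch with hypothetical names:

    kubectl create configmap test-upd --from-literal=data-1=value-1
    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-watcher               # hypothetical name
    spec:
      containers:
      - name: busybox
        image: busybox:1.29
        args: ["/bin/sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 2; done"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/config
      volumes:
      - name: cfg
        configMap:
          name: test-upd
    EOF
    # Edit the ConfigMap, then watch the mounted value roll over:
    kubectl patch configmap test-upd -p '{"data":{"data-1":"value-2"}}'
    kubectl logs -f cm-watcher
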
• [SLOW TEST:8.685 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":277,"completed":4,"skipped":71,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 20:29:48.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-551
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-551
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-551
Feb 12 20:29:49.461: INFO: Found 0 stateful pods, waiting for 1
Feb 12 20:29:59.469: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 12 20:29:59.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 12 20:29:59.836: INFO: stderr: "I0212 20:29:59.647993 313 log.go:172] (0xc000a166e0) (0xc000a2e460) Create stream\nI0212 20:29:59.648203 313 log.go:172] (0xc000a166e0) (0xc000a2e460) Stream added, broadcasting: 1\nI0212 20:29:59.665966 313 log.go:172] (0xc000a166e0) Reply frame received for 1\nI0212 20:29:59.666070 313 log.go:172] (0xc000a166e0) (0xc000655d60) Create stream\nI0212 20:29:59.666086 313 log.go:172] (0xc000a166e0) (0xc000655d60) Stream added, broadcasting: 3\nI0212 20:29:59.667683 313 log.go:172] (0xc000a166e0) Reply frame received for 3\nI0212 20:29:59.667727 313 log.go:172] (0xc000a166e0) (0xc0005de960) Create stream\nI0212 20:29:59.667742 313 log.go:172] (0xc000a166e0) (0xc0005de960) Stream added, broadcasting: 5\nI0212 20:29:59.668889 313 log.go:172] (0xc000a166e0) Reply frame received for 5\nI0212 20:29:59.729059 313 log.go:172] (0xc000a166e0) Data frame received for 5\nI0212 20:29:59.729126 313 log.go:172] (0xc0005de960) (5) Data frame handling\nI0212 20:29:59.729160 313 log.go:172] (0xc0005de960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0212 20:29:59.755400 313 log.go:172] (0xc000a166e0) Data frame received for 3\nI0212 
20:29:59.755441 313 log.go:172] (0xc000655d60) (3) Data frame handling\nI0212 20:29:59.755482 313 log.go:172] (0xc000655d60) (3) Data frame sent\nI0212 20:29:59.828562 313 log.go:172] (0xc000a166e0) (0xc000655d60) Stream removed, broadcasting: 3\nI0212 20:29:59.828735 313 log.go:172] (0xc000a166e0) Data frame received for 1\nI0212 20:29:59.828788 313 log.go:172] (0xc000a166e0) (0xc0005de960) Stream removed, broadcasting: 5\nI0212 20:29:59.828867 313 log.go:172] (0xc000a2e460) (1) Data frame handling\nI0212 20:29:59.828893 313 log.go:172] (0xc000a2e460) (1) Data frame sent\nI0212 20:29:59.828910 313 log.go:172] (0xc000a166e0) (0xc000a2e460) Stream removed, broadcasting: 1\nI0212 20:29:59.828919 313 log.go:172] (0xc000a166e0) Go away received\nI0212 20:29:59.829818 313 log.go:172] (0xc000a166e0) (0xc000a2e460) Stream removed, broadcasting: 1\nI0212 20:29:59.829830 313 log.go:172] (0xc000a166e0) (0xc000655d60) Stream removed, broadcasting: 3\nI0212 20:29:59.829839 313 log.go:172] (0xc000a166e0) (0xc0005de960) Stream removed, broadcasting: 5\n" Feb 12 20:29:59.837: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 12 20:29:59.837: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 12 20:29:59.883: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 12 20:30:09.902: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 12 20:30:09.903: INFO: Waiting for statefulset status.replicas updated to 0 Feb 12 20:30:10.001: INFO: POD NODE PHASE GRACE CONDITIONS Feb 12 20:30:10.001: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC }] Feb 12 20:30:10.001: INFO: ss-1 Pending [] Feb 12 20:30:10.001: INFO: Feb 12 20:30:10.001: INFO: StatefulSet ss has not reached scale 3, at 2 Feb 12 20:30:11.009: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.976559113s Feb 12 20:30:12.306: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.969395607s Feb 12 20:30:13.334: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.671842158s Feb 12 20:30:14.430: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.643714505s Feb 12 20:30:16.484: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.548370828s Feb 12 20:30:17.522: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.49447914s Feb 12 20:30:18.559: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.45610085s Feb 12 20:30:19.573: INFO: Verifying statefulset ss doesn't scale past 3 for another 419.345917ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-551 Feb 12 20:30:20.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 12 20:30:21.007: INFO: stderr: "I0212 20:30:20.815783 336 log.go:172] (0xc0000f53f0) (0xc0009de000) Create stream\nI0212 
20:30:20.816002 336 log.go:172] (0xc0000f53f0) (0xc0009de000) Stream added, broadcasting: 1\nI0212 20:30:20.824126 336 log.go:172] (0xc0000f53f0) Reply frame received for 1\nI0212 20:30:20.824261 336 log.go:172] (0xc0000f53f0) (0xc0006d5a40) Create stream\nI0212 20:30:20.824294 336 log.go:172] (0xc0000f53f0) (0xc0006d5a40) Stream added, broadcasting: 3\nI0212 20:30:20.826721 336 log.go:172] (0xc0000f53f0) Reply frame received for 3\nI0212 20:30:20.826763 336 log.go:172] (0xc0000f53f0) (0xc0009de0a0) Create stream\nI0212 20:30:20.826788 336 log.go:172] (0xc0000f53f0) (0xc0009de0a0) Stream added, broadcasting: 5\nI0212 20:30:20.829548 336 log.go:172] (0xc0000f53f0) Reply frame received for 5\nI0212 20:30:20.917160 336 log.go:172] (0xc0000f53f0) Data frame received for 5\nI0212 20:30:20.917232 336 log.go:172] (0xc0009de0a0) (5) Data frame handling\nI0212 20:30:20.917260 336 log.go:172] (0xc0009de0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0212 20:30:20.918345 336 log.go:172] (0xc0000f53f0) Data frame received for 3\nI0212 20:30:20.918420 336 log.go:172] (0xc0006d5a40) (3) Data frame handling\nI0212 20:30:20.918457 336 log.go:172] (0xc0006d5a40) (3) Data frame sent\nI0212 20:30:20.996146 336 log.go:172] (0xc0000f53f0) (0xc0006d5a40) Stream removed, broadcasting: 3\nI0212 20:30:20.996398 336 log.go:172] (0xc0000f53f0) Data frame received for 1\nI0212 20:30:20.996429 336 log.go:172] (0xc0009de000) (1) Data frame handling\nI0212 20:30:20.996439 336 log.go:172] (0xc0009de000) (1) Data frame sent\nI0212 20:30:20.996446 336 log.go:172] (0xc0000f53f0) (0xc0009de000) Stream removed, broadcasting: 1\nI0212 20:30:20.996823 336 log.go:172] (0xc0000f53f0) (0xc0009de0a0) Stream removed, broadcasting: 5\nI0212 20:30:20.996859 336 log.go:172] (0xc0000f53f0) (0xc0009de000) Stream removed, broadcasting: 1\nI0212 20:30:20.996865 336 log.go:172] (0xc0000f53f0) (0xc0006d5a40) Stream removed, broadcasting: 3\nI0212 20:30:20.996873 336 log.go:172] (0xc0000f53f0) (0xc0009de0a0) Stream removed, broadcasting: 5\n" Feb 12 20:30:21.007: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 12 20:30:21.007: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 12 20:30:21.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 12 20:30:21.368: INFO: stderr: "I0212 20:30:21.196887 358 log.go:172] (0xc000aee000) (0xc00056d180) Create stream\nI0212 20:30:21.197001 358 log.go:172] (0xc000aee000) (0xc00056d180) Stream added, broadcasting: 1\nI0212 20:30:21.199366 358 log.go:172] (0xc000aee000) Reply frame received for 1\nI0212 20:30:21.199401 358 log.go:172] (0xc000aee000) (0xc000819ea0) Create stream\nI0212 20:30:21.199411 358 log.go:172] (0xc000aee000) (0xc000819ea0) Stream added, broadcasting: 3\nI0212 20:30:21.200431 358 log.go:172] (0xc000aee000) Reply frame received for 3\nI0212 20:30:21.200484 358 log.go:172] (0xc000aee000) (0xc000739680) Create stream\nI0212 20:30:21.200505 358 log.go:172] (0xc000aee000) (0xc000739680) Stream added, broadcasting: 5\nI0212 20:30:21.202001 358 log.go:172] (0xc000aee000) Reply frame received for 5\nI0212 20:30:21.270065 358 log.go:172] (0xc000aee000) Data frame received for 3\nI0212 20:30:21.270146 358 log.go:172] (0xc000819ea0) (3) Data frame handling\nI0212 20:30:21.270178 358 log.go:172] 
(0xc000819ea0) (3) Data frame sent\nI0212 20:30:21.270226 358 log.go:172] (0xc000aee000) Data frame received for 5\nI0212 20:30:21.270250 358 log.go:172] (0xc000739680) (5) Data frame handling\nI0212 20:30:21.270280 358 log.go:172] (0xc000739680) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0212 20:30:21.353867 358 log.go:172] (0xc000aee000) (0xc000819ea0) Stream removed, broadcasting: 3\nI0212 20:30:21.354020 358 log.go:172] (0xc000aee000) Data frame received for 1\nI0212 20:30:21.354046 358 log.go:172] (0xc000aee000) (0xc000739680) Stream removed, broadcasting: 5\nI0212 20:30:21.354103 358 log.go:172] (0xc00056d180) (1) Data frame handling\nI0212 20:30:21.354129 358 log.go:172] (0xc00056d180) (1) Data frame sent\nI0212 20:30:21.354142 358 log.go:172] (0xc000aee000) (0xc00056d180) Stream removed, broadcasting: 1\nI0212 20:30:21.354155 358 log.go:172] (0xc000aee000) Go away received\nI0212 20:30:21.354762 358 log.go:172] (0xc000aee000) (0xc00056d180) Stream removed, broadcasting: 1\nI0212 20:30:21.354810 358 log.go:172] (0xc000aee000) (0xc000819ea0) Stream removed, broadcasting: 3\nI0212 20:30:21.354825 358 log.go:172] (0xc000aee000) (0xc000739680) Stream removed, broadcasting: 5\n" Feb 12 20:30:21.368: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 12 20:30:21.368: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 12 20:30:21.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 12 20:30:21.660: INFO: stderr: "I0212 20:30:21.488656 381 log.go:172] (0xc000bf4d10) (0xc000c60140) Create stream\nI0212 20:30:21.488781 381 log.go:172] (0xc000bf4d10) (0xc000c60140) Stream added, broadcasting: 1\nI0212 20:30:21.491942 381 log.go:172] (0xc000bf4d10) Reply frame received for 1\nI0212 20:30:21.491964 381 log.go:172] (0xc000bf4d10) (0xc000c601e0) Create stream\nI0212 20:30:21.491970 381 log.go:172] (0xc000bf4d10) (0xc000c601e0) Stream added, broadcasting: 3\nI0212 20:30:21.492955 381 log.go:172] (0xc000bf4d10) Reply frame received for 3\nI0212 20:30:21.492974 381 log.go:172] (0xc000bf4d10) (0xc000c460a0) Create stream\nI0212 20:30:21.492979 381 log.go:172] (0xc000bf4d10) (0xc000c460a0) Stream added, broadcasting: 5\nI0212 20:30:21.494020 381 log.go:172] (0xc000bf4d10) Reply frame received for 5\nI0212 20:30:21.562482 381 log.go:172] (0xc000bf4d10) Data frame received for 5\nI0212 20:30:21.562681 381 log.go:172] (0xc000c460a0) (5) Data frame handling\nI0212 20:30:21.562728 381 log.go:172] (0xc000c460a0) (5) Data frame sent\nI0212 20:30:21.562744 381 log.go:172] (0xc000bf4d10) Data frame received for 5\nI0212 20:30:21.562762 381 log.go:172] (0xc000c460a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\nI0212 20:30:21.562826 381 log.go:172] (0xc000c460a0) (5) Data frame sent\nI0212 20:30:21.563037 381 log.go:172] (0xc000bf4d10) Data frame received for 5\nI0212 20:30:21.563081 381 log.go:172] (0xc000c460a0) (5) Data frame handling\nI0212 20:30:21.563113 381 log.go:172] (0xc000c460a0) (5) Data frame sent\n+ true\nI0212 20:30:21.563203 381 log.go:172] (0xc000bf4d10) Data frame received for 3\nI0212 20:30:21.563229 381 log.go:172] (0xc000c601e0) (3) Data 
frame handling\nI0212 20:30:21.563254 381 log.go:172] (0xc000c601e0) (3) Data frame sent\nI0212 20:30:21.646202 381 log.go:172] (0xc000bf4d10) Data frame received for 1\nI0212 20:30:21.646283 381 log.go:172] (0xc000c60140) (1) Data frame handling\nI0212 20:30:21.646310 381 log.go:172] (0xc000c60140) (1) Data frame sent\nI0212 20:30:21.646699 381 log.go:172] (0xc000bf4d10) (0xc000c60140) Stream removed, broadcasting: 1\nI0212 20:30:21.646883 381 log.go:172] (0xc000bf4d10) (0xc000c601e0) Stream removed, broadcasting: 3\nI0212 20:30:21.651050 381 log.go:172] (0xc000bf4d10) (0xc000c460a0) Stream removed, broadcasting: 5\nI0212 20:30:21.651214 381 log.go:172] (0xc000bf4d10) (0xc000c60140) Stream removed, broadcasting: 1\nI0212 20:30:21.651238 381 log.go:172] (0xc000bf4d10) (0xc000c601e0) Stream removed, broadcasting: 3\nI0212 20:30:21.651256 381 log.go:172] (0xc000bf4d10) (0xc000c460a0) Stream removed, broadcasting: 5\n" Feb 12 20:30:21.660: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 12 20:30:21.660: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 12 20:30:21.666: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 12 20:30:21.666: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 12 20:30:21.666: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 12 20:30:21.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 12 20:30:22.215: INFO: stderr: "I0212 20:30:21.983417 401 log.go:172] (0xc000974000) (0xc000a40000) Create stream\nI0212 20:30:21.983688 401 log.go:172] (0xc000974000) (0xc000a40000) Stream added, broadcasting: 1\nI0212 20:30:21.999719 401 log.go:172] (0xc000974000) Reply frame received for 1\nI0212 20:30:21.999799 401 log.go:172] (0xc000974000) (0xc0005b86e0) Create stream\nI0212 20:30:21.999812 401 log.go:172] (0xc000974000) (0xc0005b86e0) Stream added, broadcasting: 3\nI0212 20:30:22.002019 401 log.go:172] (0xc000974000) Reply frame received for 3\nI0212 20:30:22.002065 401 log.go:172] (0xc000974000) (0xc00015f040) Create stream\nI0212 20:30:22.002078 401 log.go:172] (0xc000974000) (0xc00015f040) Stream added, broadcasting: 5\nI0212 20:30:22.004168 401 log.go:172] (0xc000974000) Reply frame received for 5\nI0212 20:30:22.093552 401 log.go:172] (0xc000974000) Data frame received for 5\nI0212 20:30:22.093672 401 log.go:172] (0xc00015f040) (5) Data frame handling\nI0212 20:30:22.093716 401 log.go:172] (0xc00015f040) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0212 20:30:22.093884 401 log.go:172] (0xc000974000) Data frame received for 3\nI0212 20:30:22.093990 401 log.go:172] (0xc0005b86e0) (3) Data frame handling\nI0212 20:30:22.094025 401 log.go:172] (0xc0005b86e0) (3) Data frame sent\nI0212 20:30:22.201750 401 log.go:172] (0xc000974000) (0xc0005b86e0) Stream removed, broadcasting: 3\nI0212 20:30:22.202019 401 log.go:172] (0xc000974000) Data frame received for 1\nI0212 20:30:22.202035 401 log.go:172] (0xc000a40000) (1) Data frame handling\nI0212 20:30:22.202069 401 log.go:172] (0xc000a40000) (1) Data frame sent\nI0212 20:30:22.202145 401 log.go:172] (0xc000974000) (0xc000a40000) Stream removed, 
broadcasting: 1\nI0212 20:30:22.202352 401 log.go:172] (0xc000974000) (0xc00015f040) Stream removed, broadcasting: 5\nI0212 20:30:22.202770 401 log.go:172] (0xc000974000) Go away received\nI0212 20:30:22.203581 401 log.go:172] (0xc000974000) (0xc000a40000) Stream removed, broadcasting: 1\nI0212 20:30:22.203607 401 log.go:172] (0xc000974000) (0xc0005b86e0) Stream removed, broadcasting: 3\nI0212 20:30:22.203624 401 log.go:172] (0xc000974000) (0xc00015f040) Stream removed, broadcasting: 5\n" Feb 12 20:30:22.216: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 12 20:30:22.216: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 12 20:30:22.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 12 20:30:22.625: INFO: stderr: "I0212 20:30:22.425820 423 log.go:172] (0xc000ada000) (0xc000aa4000) Create stream\nI0212 20:30:22.426007 423 log.go:172] (0xc000ada000) (0xc000aa4000) Stream added, broadcasting: 1\nI0212 20:30:22.429769 423 log.go:172] (0xc000ada000) Reply frame received for 1\nI0212 20:30:22.429816 423 log.go:172] (0xc000ada000) (0xc0009a8000) Create stream\nI0212 20:30:22.429834 423 log.go:172] (0xc000ada000) (0xc0009a8000) Stream added, broadcasting: 3\nI0212 20:30:22.431043 423 log.go:172] (0xc000ada000) Reply frame received for 3\nI0212 20:30:22.431075 423 log.go:172] (0xc000ada000) (0xc000a84000) Create stream\nI0212 20:30:22.431097 423 log.go:172] (0xc000ada000) (0xc000a84000) Stream added, broadcasting: 5\nI0212 20:30:22.432088 423 log.go:172] (0xc000ada000) Reply frame received for 5\nI0212 20:30:22.503432 423 log.go:172] (0xc000ada000) Data frame received for 5\nI0212 20:30:22.503825 423 log.go:172] (0xc000a84000) (5) Data frame handling\nI0212 20:30:22.503923 423 log.go:172] (0xc000a84000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0212 20:30:22.524059 423 log.go:172] (0xc000ada000) Data frame received for 3\nI0212 20:30:22.524111 423 log.go:172] (0xc0009a8000) (3) Data frame handling\nI0212 20:30:22.524143 423 log.go:172] (0xc0009a8000) (3) Data frame sent\nI0212 20:30:22.614593 423 log.go:172] (0xc000ada000) Data frame received for 1\nI0212 20:30:22.614715 423 log.go:172] (0xc000ada000) (0xc0009a8000) Stream removed, broadcasting: 3\nI0212 20:30:22.614764 423 log.go:172] (0xc000aa4000) (1) Data frame handling\nI0212 20:30:22.614791 423 log.go:172] (0xc000aa4000) (1) Data frame sent\nI0212 20:30:22.614831 423 log.go:172] (0xc000ada000) (0xc000a84000) Stream removed, broadcasting: 5\nI0212 20:30:22.614867 423 log.go:172] (0xc000ada000) (0xc000aa4000) Stream removed, broadcasting: 1\nI0212 20:30:22.614886 423 log.go:172] (0xc000ada000) Go away received\nI0212 20:30:22.616800 423 log.go:172] (0xc000ada000) (0xc000aa4000) Stream removed, broadcasting: 1\nI0212 20:30:22.617112 423 log.go:172] (0xc000ada000) (0xc0009a8000) Stream removed, broadcasting: 3\nI0212 20:30:22.617151 423 log.go:172] (0xc000ada000) (0xc000a84000) Stream removed, broadcasting: 5\n" Feb 12 20:30:22.625: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 12 20:30:22.625: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 12 20:30:22.626: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 12 20:30:23.209: INFO: stderr: "I0212 20:30:22.864606 443 log.go:172] (0xc000a7f550) (0xc000a745a0) Create stream\nI0212 20:30:22.865325 443 log.go:172] (0xc000a7f550) (0xc000a745a0) Stream added, broadcasting: 1\nI0212 20:30:22.885741 443 log.go:172] (0xc000a7f550) Reply frame received for 1\nI0212 20:30:22.886056 443 log.go:172] (0xc000a7f550) (0xc0006508c0) Create stream\nI0212 20:30:22.886104 443 log.go:172] (0xc000a7f550) (0xc0006508c0) Stream added, broadcasting: 3\nI0212 20:30:22.889271 443 log.go:172] (0xc000a7f550) Reply frame received for 3\nI0212 20:30:22.889515 443 log.go:172] (0xc000a7f550) (0xc0003fd5e0) Create stream\nI0212 20:30:22.889588 443 log.go:172] (0xc000a7f550) (0xc0003fd5e0) Stream added, broadcasting: 5\nI0212 20:30:22.895413 443 log.go:172] (0xc000a7f550) Reply frame received for 5\nI0212 20:30:23.014467 443 log.go:172] (0xc000a7f550) Data frame received for 5\nI0212 20:30:23.014658 443 log.go:172] (0xc0003fd5e0) (5) Data frame handling\nI0212 20:30:23.014728 443 log.go:172] (0xc0003fd5e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0212 20:30:23.058667 443 log.go:172] (0xc000a7f550) Data frame received for 3\nI0212 20:30:23.058942 443 log.go:172] (0xc0006508c0) (3) Data frame handling\nI0212 20:30:23.059074 443 log.go:172] (0xc0006508c0) (3) Data frame sent\nI0212 20:30:23.190981 443 log.go:172] (0xc000a7f550) (0xc0006508c0) Stream removed, broadcasting: 3\nI0212 20:30:23.191614 443 log.go:172] (0xc000a7f550) Data frame received for 1\nI0212 20:30:23.191670 443 log.go:172] (0xc000a745a0) (1) Data frame handling\nI0212 20:30:23.191765 443 log.go:172] (0xc000a745a0) (1) Data frame sent\nI0212 20:30:23.191820 443 log.go:172] (0xc000a7f550) (0xc000a745a0) Stream removed, broadcasting: 1\nI0212 20:30:23.191956 443 log.go:172] (0xc000a7f550) (0xc0003fd5e0) Stream removed, broadcasting: 5\nI0212 20:30:23.192414 443 log.go:172] (0xc000a7f550) Go away received\nI0212 20:30:23.193878 443 log.go:172] (0xc000a7f550) (0xc000a745a0) Stream removed, broadcasting: 1\nI0212 20:30:23.194003 443 log.go:172] (0xc000a7f550) (0xc0006508c0) Stream removed, broadcasting: 3\nI0212 20:30:23.194018 443 log.go:172] (0xc000a7f550) (0xc0003fd5e0) Stream removed, broadcasting: 5\n" Feb 12 20:30:23.210: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 12 20:30:23.210: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 12 20:30:23.210: INFO: Waiting for statefulset status.replicas updated to 0 Feb 12 20:30:23.216: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Feb 12 20:30:33.229: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 12 20:30:33.229: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 12 20:30:33.229: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 12 20:30:34.355: INFO: POD NODE PHASE GRACE CONDITIONS Feb 12 20:30:34.355: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-02-12 20:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC }] Feb 12 20:30:34.355: INFO: ss-1 jerma-server-mvvl6gufaqub Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:09 +0000 UTC }] Feb 12 20:30:34.355: INFO: ss-2 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC }] Feb 12 20:30:34.355: INFO: Feb 12 20:30:34.355: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 12 20:30:36.244: INFO: POD NODE PHASE GRACE CONDITIONS Feb 12 20:30:36.244: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC }] Feb 12 20:30:36.244: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:09 +0000 UTC }] Feb 12 20:30:36.244: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC }] Feb 12 20:30:36.244: INFO: Feb 12 20:30:36.244: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 12 20:30:37.253: INFO: POD NODE PHASE GRACE CONDITIONS Feb 12 20:30:37.253: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC }] Feb 12 20:30:37.253: INFO: 
ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:09 +0000 UTC }] Feb 12 20:30:37.254: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC }] Feb 12 20:30:37.254: INFO: Feb 12 20:30:37.254: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 12 20:30:38.262: INFO: POD NODE PHASE GRACE CONDITIONS Feb 12 20:30:38.262: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC }] Feb 12 20:30:38.262: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:09 +0000 UTC }] Feb 12 20:30:38.262: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC }] Feb 12 20:30:38.262: INFO: Feb 12 20:30:38.262: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 12 20:30:39.312: INFO: POD NODE PHASE GRACE CONDITIONS Feb 12 20:30:39.312: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC }] Feb 12 20:30:39.312: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:09 +0000 UTC }] Feb 12 20:30:39.312: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC }] Feb 12 20:30:39.313: INFO: Feb 12 20:30:39.313: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 12 20:30:40.341: INFO: POD NODE PHASE GRACE CONDITIONS Feb 12 20:30:40.341: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC }] Feb 12 20:30:40.341: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:09 +0000 UTC }] Feb 12 20:30:40.341: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC }] Feb 12 20:30:40.341: INFO: Feb 12 20:30:40.341: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 12 20:30:41.351: INFO: POD NODE PHASE GRACE CONDITIONS Feb 12 20:30:41.352: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC }] Feb 12 20:30:41.352: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC }]
Feb 12 20:30:41.352: INFO:
Feb 12 20:30:41.352: INFO: StatefulSet ss has not reached scale 0, at 2
Feb 12 20:30:42.362: INFO: POD NODE PHASE GRACE CONDITIONS
Feb 12 20:30:42.362: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:29:49 +0000 UTC }]
Feb 12 20:30:42.362: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:23 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 20:30:10 +0000 UTC }]
Feb 12 20:30:42.362: INFO:
Feb 12 20:30:42.362: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-551
Feb 12 20:30:43.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:30:43.578: INFO: rc: 1
Feb 12 20:30:43.578: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("webserver")
error: exit status 1
Feb 12 20:30:53.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:30:53.786: INFO: rc: 1
Feb 12 20:30:53.786: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:31:03.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:31:03.941: INFO: rc: 1
Feb 12 20:31:03.941: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:31:13.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:31:14.210: INFO: rc: 1
Feb 12 20:31:14.210: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:31:24.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:31:24.318: INFO: rc: 1
Feb 12 20:31:24.318: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:31:34.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:31:37.046: INFO: rc: 1
Feb 12 20:31:37.046: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:31:47.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:31:47.231: INFO: rc: 1
Feb 12 20:31:47.231: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:31:57.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:31:57.387: INFO: rc: 1
Feb 12 20:31:57.387: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:32:07.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:32:07.539: INFO: rc: 1
Feb 12 20:32:07.539: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:32:17.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:32:17.694: INFO: rc: 1
Feb 12 20:32:17.694: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:32:27.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:32:27.881: INFO: rc: 1
Feb 12 20:32:27.881: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:32:37.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:32:38.021: INFO: rc: 1
Feb 12 20:32:38.022: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:32:48.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:32:48.123: INFO: rc: 1
Feb 12 20:32:48.123: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:32:58.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:32:58.279: INFO: rc: 1
Feb 12 20:32:58.279: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:33:08.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:33:08.427: INFO: rc: 1
Feb 12 20:33:08.428: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:33:18.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:33:18.597: INFO: rc: 1
Feb 12 20:33:18.597: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:33:28.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:33:28.721: INFO: rc: 1
Feb 12 20:33:28.722: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:33:38.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:33:38.874: INFO: rc: 1
Feb 12 20:33:38.874: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:33:48.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:33:49.010: INFO: rc: 1
Feb 12 20:33:49.011: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:33:59.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:33:59.154: INFO: rc: 1
Feb 12 20:33:59.155: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:34:09.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:34:09.290: INFO: rc: 1
Feb 12 20:34:09.291: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:34:19.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:34:19.505: INFO: rc: 1
Feb 12 20:34:19.506: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:34:29.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:34:29.678: INFO: rc: 1
Feb 12 20:34:29.679: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:34:39.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:34:39.843: INFO: rc: 1
Feb 12 20:34:39.843: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:34:49.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:34:49.978: INFO: rc: 1
Feb 12 20:34:49.978: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:34:59.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:35:00.161: INFO: rc: 1
Feb 12 20:35:00.161: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:35:10.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:35:10.349: INFO: rc: 1
Feb 12 20:35:10.349: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:35:20.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:35:20.473: INFO: rc: 1
Feb 12 20:35:20.473: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:35:30.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:35:30.623: INFO: rc: 1
Feb 12 20:35:30.623: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:35:40.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:35:40.761: INFO: rc: 1
Feb 12 20:35:40.761: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 12 20:35:50.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-551 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 20:35:50.894: INFO: rc: 1
Feb 12 20:35:50.894: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0:
Feb 12 20:35:50.894: INFO: Scaling statefulset ss to 0
Feb 12 20:35:50.913: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 12 20:35:50.915: INFO: Deleting all statefulset in ns statefulset-551
Feb 12 20:35:50.918: INFO: Scaling statefulset ss to 0
Feb 12 20:35:50.926: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 20:35:50.932: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 20:35:50.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-551" for this suite.
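The scale-down phase above hinges on one trick: the pods' readiness probe targets index.html in the httpd docroot, so moving that file out breaks readiness, and moving it back restores it; the restore is retried every 10s, with `|| true` inside the container shell so the mv itself never fails the exec. A minimal shell sketch of that toggle, reusing the namespace and pod name from this run (the 30-attempt budget is an assumption, not the framework's exact timeout):

    NS=statefulset-551
    POD=ss-0
    # Break readiness: the probe target disappears from the docroot.
    kubectl --namespace="$NS" exec "$POD" -- /bin/sh -c \
      'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
    # Restore readiness, retrying every 10s; during scale-down the pod may
    # already be gone, in which case kubectl exec itself returns rc=1.
    for attempt in $(seq 1 30); do
      kubectl --namespace="$NS" exec "$POD" -- /bin/sh -c \
        'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' && break
      echo "attempt $attempt failed; waiting 10s to retry"
      sleep 10
    done

As the log shows, once ss-0 is deleted the restore can never succeed; the test tolerates that and only verifies that the StatefulSet reaches 0 replicas anyway.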
• [SLOW TEST:362.440 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":277,"completed":5,"skipped":91,"failed":0}
SSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 20:35:50.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-8ab400bb-24ba-40c8-bd2c-d87b58bf683b in namespace container-probe-5211
Feb 12 20:35:57.066: INFO: Started pod liveness-8ab400bb-24ba-40c8-bd2c-d87b58bf683b in namespace container-probe-5211
STEP: checking the pod's current state and verifying that restartCount is present
Feb 12 20:35:57.070: INFO: Initial restart count of pod liveness-8ab400bb-24ba-40c8-bd2c-d87b58bf683b is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 20:39:58.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5211" for this suite.
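This probe test is the inverse of the restart cases: a container that keeps listening on TCP 8080 must never be killed by its liveness probe, so the assertion is simply that restartCount stays 0 for the observation window (about four minutes here). A sketch of an equivalent pod, with an illustrative name, image, and timings rather than the framework's exact spec:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-tcp-demo          # illustrative, not the pod from this run
    spec:
      containers:
      - name: webserver
        image: registry.k8s.io/e2e-test-images/agnhost:2.39  # assumed image; anything listening on 8080 works
        args: ["netexec", "--http-port=8080"]
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 15      # illustrative timings
          periodSeconds: 10
    EOF
    # The probe passes as long as port 8080 accepts connections, so this
    # should keep printing 0, which is what the test above asserts:
    kubectl get pod liveness-tcp-demo \
      -o jsonpath='{.status.containerStatuses[0].restartCount}'

If the process stopped listening, the kubelet would restart the container and restartCount would tick up, failing the should-*not*-be-restarted assertion.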
• [SLOW TEST:248.040 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":277,"completed":6,"skipped":95,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 20:39:59.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 20:39:59.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-763
I0212 20:39:59.289203 9 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-763, replica count: 1
I0212 20:40:00.339968 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 20:40:01.340426 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 20:40:02.340806 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 20:40:03.341138 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 20:40:04.341383 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 20:40:05.341602 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 20:40:06.342468 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 20:40:07.343006 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 20:40:08.343333 9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 12 20:40:08.465: INFO: Created: latency-svc-hxs64
Feb 12 20:40:08.503: INFO: Got endpoints: latency-svc-hxs64 [59.691854ms]
Feb 12 20:40:08.585: INFO: Created: latency-svc-q8qgm
Feb 12 20:40:08.592: INFO: Got endpoints: latency-svc-q8qgm [88.486558ms]
Feb 12 20:40:08.616: INFO: Created: latency-svc-l89ds
Feb 12 20:40:08.635: INFO: Got endpoints: latency-svc-l89ds [129.78677ms]
Feb 12 20:40:08.667: INFO:
Created: latency-svc-xcs89 Feb 12 20:40:08.746: INFO: Got endpoints: latency-svc-xcs89 [241.510765ms] Feb 12 20:40:08.751: INFO: Created: latency-svc-m7tw9 Feb 12 20:40:08.756: INFO: Got endpoints: latency-svc-m7tw9 [250.206357ms] Feb 12 20:40:08.813: INFO: Created: latency-svc-z8b95 Feb 12 20:40:08.815: INFO: Got endpoints: latency-svc-z8b95 [312.035198ms] Feb 12 20:40:08.896: INFO: Created: latency-svc-vx7qr Feb 12 20:40:08.915: INFO: Got endpoints: latency-svc-vx7qr [410.07304ms] Feb 12 20:40:08.918: INFO: Created: latency-svc-wxkgp Feb 12 20:40:08.923: INFO: Got endpoints: latency-svc-wxkgp [417.699965ms] Feb 12 20:40:08.951: INFO: Created: latency-svc-hgl2g Feb 12 20:40:08.967: INFO: Created: latency-svc-mhhq5 Feb 12 20:40:08.967: INFO: Got endpoints: latency-svc-hgl2g [461.542368ms] Feb 12 20:40:08.995: INFO: Created: latency-svc-lhwrj Feb 12 20:40:08.998: INFO: Got endpoints: latency-svc-mhhq5 [492.400068ms] Feb 12 20:40:09.091: INFO: Got endpoints: latency-svc-lhwrj [587.60531ms] Feb 12 20:40:09.108: INFO: Created: latency-svc-mns2f Feb 12 20:40:09.127: INFO: Got endpoints: latency-svc-mns2f [621.795685ms] Feb 12 20:40:09.143: INFO: Created: latency-svc-jv4sx Feb 12 20:40:09.169: INFO: Got endpoints: latency-svc-jv4sx [664.650768ms] Feb 12 20:40:09.255: INFO: Created: latency-svc-7rsng Feb 12 20:40:09.260: INFO: Got endpoints: latency-svc-7rsng [754.571118ms] Feb 12 20:40:09.294: INFO: Created: latency-svc-f7c9v Feb 12 20:40:09.307: INFO: Got endpoints: latency-svc-f7c9v [803.205708ms] Feb 12 20:40:09.327: INFO: Created: latency-svc-v56v5 Feb 12 20:40:09.339: INFO: Got endpoints: latency-svc-v56v5 [834.334372ms] Feb 12 20:40:09.423: INFO: Created: latency-svc-56xjl Feb 12 20:40:09.432: INFO: Got endpoints: latency-svc-56xjl [839.490451ms] Feb 12 20:40:09.472: INFO: Created: latency-svc-8fbvs Feb 12 20:40:09.513: INFO: Got endpoints: latency-svc-8fbvs [877.705906ms] Feb 12 20:40:09.521: INFO: Created: latency-svc-jr5qc Feb 12 20:40:09.559: INFO: Got endpoints: latency-svc-jr5qc [812.843603ms] Feb 12 20:40:09.564: INFO: Created: latency-svc-qrmqg Feb 12 20:40:09.619: INFO: Got endpoints: latency-svc-qrmqg [863.662873ms] Feb 12 20:40:09.622: INFO: Created: latency-svc-nh8v5 Feb 12 20:40:09.718: INFO: Got endpoints: latency-svc-nh8v5 [902.406449ms] Feb 12 20:40:09.738: INFO: Created: latency-svc-ssc45 Feb 12 20:40:09.742: INFO: Got endpoints: latency-svc-ssc45 [827.05047ms] Feb 12 20:40:09.765: INFO: Created: latency-svc-jr7s4 Feb 12 20:40:09.770: INFO: Got endpoints: latency-svc-jr7s4 [846.863045ms] Feb 12 20:40:09.793: INFO: Created: latency-svc-vqpjw Feb 12 20:40:09.799: INFO: Got endpoints: latency-svc-vqpjw [831.991771ms] Feb 12 20:40:09.814: INFO: Created: latency-svc-d92d8 Feb 12 20:40:09.856: INFO: Got endpoints: latency-svc-d92d8 [857.771411ms] Feb 12 20:40:09.862: INFO: Created: latency-svc-t22st Feb 12 20:40:09.890: INFO: Got endpoints: latency-svc-t22st [798.677985ms] Feb 12 20:40:09.894: INFO: Created: latency-svc-vph6t Feb 12 20:40:09.905: INFO: Got endpoints: latency-svc-vph6t [778.042555ms] Feb 12 20:40:09.950: INFO: Created: latency-svc-vj4n4 Feb 12 20:40:10.003: INFO: Got endpoints: latency-svc-vj4n4 [833.880671ms] Feb 12 20:40:10.062: INFO: Created: latency-svc-jv9vn Feb 12 20:40:10.078: INFO: Got endpoints: latency-svc-jv9vn [817.579737ms] Feb 12 20:40:10.096: INFO: Created: latency-svc-bzkb4 Feb 12 20:40:10.195: INFO: Got endpoints: latency-svc-bzkb4 [886.506074ms] Feb 12 20:40:10.208: INFO: Created: latency-svc-twdn8 Feb 12 20:40:10.222: INFO: Got endpoints: 
latency-svc-twdn8 [883.518863ms] Feb 12 20:40:10.283: INFO: Created: latency-svc-hdqj5 Feb 12 20:40:10.348: INFO: Got endpoints: latency-svc-hdqj5 [916.00471ms] Feb 12 20:40:10.388: INFO: Created: latency-svc-jddzz Feb 12 20:40:10.428: INFO: Got endpoints: latency-svc-jddzz [914.539497ms] Feb 12 20:40:10.430: INFO: Created: latency-svc-8zxbm Feb 12 20:40:10.545: INFO: Got endpoints: latency-svc-8zxbm [985.300885ms] Feb 12 20:40:10.630: INFO: Created: latency-svc-clw8g Feb 12 20:40:10.637: INFO: Got endpoints: latency-svc-clw8g [1.017491619s] Feb 12 20:40:10.864: INFO: Created: latency-svc-85w65 Feb 12 20:40:10.891: INFO: Got endpoints: latency-svc-85w65 [1.172792027s] Feb 12 20:40:10.926: INFO: Created: latency-svc-trdd2 Feb 12 20:40:10.936: INFO: Got endpoints: latency-svc-trdd2 [1.194166562s] Feb 12 20:40:10.957: INFO: Created: latency-svc-b2g6r Feb 12 20:40:11.046: INFO: Got endpoints: latency-svc-b2g6r [1.276855802s] Feb 12 20:40:11.057: INFO: Created: latency-svc-phfz8 Feb 12 20:40:11.076: INFO: Got endpoints: latency-svc-phfz8 [1.276444266s] Feb 12 20:40:11.110: INFO: Created: latency-svc-lb84q Feb 12 20:40:11.193: INFO: Created: latency-svc-4svlb Feb 12 20:40:11.193: INFO: Got endpoints: latency-svc-lb84q [1.337579993s] Feb 12 20:40:11.223: INFO: Got endpoints: latency-svc-4svlb [1.333349559s] Feb 12 20:40:11.938: INFO: Created: latency-svc-jjg9w Feb 12 20:40:11.938: INFO: Got endpoints: latency-svc-jjg9w [2.032370274s] Feb 12 20:40:12.217: INFO: Created: latency-svc-ngchq Feb 12 20:40:12.224: INFO: Got endpoints: latency-svc-ngchq [2.220498127s] Feb 12 20:40:12.273: INFO: Created: latency-svc-fdctr Feb 12 20:40:12.298: INFO: Got endpoints: latency-svc-fdctr [2.220616394s] Feb 12 20:40:12.408: INFO: Created: latency-svc-8gb2m Feb 12 20:40:12.417: INFO: Got endpoints: latency-svc-8gb2m [2.222525466s] Feb 12 20:40:12.485: INFO: Created: latency-svc-mg6vt Feb 12 20:40:12.489: INFO: Got endpoints: latency-svc-mg6vt [2.265946542s] Feb 12 20:40:12.554: INFO: Created: latency-svc-vtmsb Feb 12 20:40:12.585: INFO: Got endpoints: latency-svc-vtmsb [2.23708474s] Feb 12 20:40:12.605: INFO: Created: latency-svc-kt4vq Feb 12 20:40:12.617: INFO: Got endpoints: latency-svc-kt4vq [2.189181354s] Feb 12 20:40:12.740: INFO: Created: latency-svc-9ng98 Feb 12 20:40:12.754: INFO: Got endpoints: latency-svc-9ng98 [2.209099235s] Feb 12 20:40:12.840: INFO: Created: latency-svc-d8865 Feb 12 20:40:12.911: INFO: Got endpoints: latency-svc-d8865 [2.274121452s] Feb 12 20:40:12.928: INFO: Created: latency-svc-2ww68 Feb 12 20:40:12.941: INFO: Got endpoints: latency-svc-2ww68 [2.049520759s] Feb 12 20:40:13.106: INFO: Created: latency-svc-k8wkn Feb 12 20:40:13.149: INFO: Got endpoints: latency-svc-k8wkn [2.211934315s] Feb 12 20:40:13.150: INFO: Created: latency-svc-2trsw Feb 12 20:40:13.184: INFO: Got endpoints: latency-svc-2trsw [2.137717542s] Feb 12 20:40:13.189: INFO: Created: latency-svc-kspzf Feb 12 20:40:13.259: INFO: Got endpoints: latency-svc-kspzf [2.183162635s] Feb 12 20:40:13.266: INFO: Created: latency-svc-5gk6w Feb 12 20:40:13.291: INFO: Got endpoints: latency-svc-5gk6w [2.097525625s] Feb 12 20:40:13.294: INFO: Created: latency-svc-4vmw6 Feb 12 20:40:13.318: INFO: Got endpoints: latency-svc-4vmw6 [2.095052209s] Feb 12 20:40:13.342: INFO: Created: latency-svc-h4rcf Feb 12 20:40:13.446: INFO: Got endpoints: latency-svc-h4rcf [1.508030311s] Feb 12 20:40:13.451: INFO: Created: latency-svc-vrnht Feb 12 20:40:13.495: INFO: Got endpoints: latency-svc-vrnht [1.271253593s] Feb 12 20:40:13.501: INFO: Created: 
latency-svc-hwmg2 Feb 12 20:40:13.504: INFO: Got endpoints: latency-svc-hwmg2 [1.205218674s] Feb 12 20:40:13.535: INFO: Created: latency-svc-zp797 Feb 12 20:40:13.539: INFO: Got endpoints: latency-svc-zp797 [1.121596309s] Feb 12 20:40:13.605: INFO: Created: latency-svc-p9j9c Feb 12 20:40:13.618: INFO: Got endpoints: latency-svc-p9j9c [1.12894464s] Feb 12 20:40:13.640: INFO: Created: latency-svc-s7mzj Feb 12 20:40:13.662: INFO: Created: latency-svc-nflsp Feb 12 20:40:13.665: INFO: Got endpoints: latency-svc-s7mzj [1.080471432s] Feb 12 20:40:13.673: INFO: Got endpoints: latency-svc-nflsp [1.055581077s] Feb 12 20:40:13.786: INFO: Created: latency-svc-s5xbn Feb 12 20:40:13.832: INFO: Got endpoints: latency-svc-s5xbn [1.078334105s] Feb 12 20:40:13.838: INFO: Created: latency-svc-drpm6 Feb 12 20:40:13.856: INFO: Got endpoints: latency-svc-drpm6 [945.072876ms] Feb 12 20:40:13.950: INFO: Created: latency-svc-6pft6 Feb 12 20:40:13.963: INFO: Got endpoints: latency-svc-6pft6 [1.021933934s] Feb 12 20:40:13.965: INFO: Created: latency-svc-ds8dd Feb 12 20:40:13.977: INFO: Got endpoints: latency-svc-ds8dd [828.161378ms] Feb 12 20:40:14.004: INFO: Created: latency-svc-sspj2 Feb 12 20:40:14.013: INFO: Got endpoints: latency-svc-sspj2 [828.112351ms] Feb 12 20:40:14.049: INFO: Created: latency-svc-nglv4 Feb 12 20:40:14.119: INFO: Got endpoints: latency-svc-nglv4 [859.677498ms] Feb 12 20:40:14.137: INFO: Created: latency-svc-wj6hh Feb 12 20:40:14.142: INFO: Got endpoints: latency-svc-wj6hh [851.21836ms] Feb 12 20:40:14.171: INFO: Created: latency-svc-cvblp Feb 12 20:40:14.178: INFO: Got endpoints: latency-svc-cvblp [859.518578ms] Feb 12 20:40:14.206: INFO: Created: latency-svc-hnwh5 Feb 12 20:40:14.260: INFO: Created: latency-svc-7wqs8 Feb 12 20:40:14.261: INFO: Got endpoints: latency-svc-hnwh5 [814.825684ms] Feb 12 20:40:14.268: INFO: Got endpoints: latency-svc-7wqs8 [772.627875ms] Feb 12 20:40:14.287: INFO: Created: latency-svc-w52xf Feb 12 20:40:14.331: INFO: Got endpoints: latency-svc-w52xf [827.21212ms] Feb 12 20:40:14.398: INFO: Created: latency-svc-qckwt Feb 12 20:40:14.401: INFO: Got endpoints: latency-svc-qckwt [862.007101ms] Feb 12 20:40:14.460: INFO: Created: latency-svc-l26ff Feb 12 20:40:14.475: INFO: Got endpoints: latency-svc-l26ff [856.677733ms] Feb 12 20:40:14.541: INFO: Created: latency-svc-2nc6d Feb 12 20:40:14.565: INFO: Got endpoints: latency-svc-2nc6d [898.93742ms] Feb 12 20:40:14.567: INFO: Created: latency-svc-cvc8l Feb 12 20:40:14.576: INFO: Got endpoints: latency-svc-cvc8l [903.581076ms] Feb 12 20:40:14.595: INFO: Created: latency-svc-hpd8j Feb 12 20:40:14.613: INFO: Got endpoints: latency-svc-hpd8j [779.954537ms] Feb 12 20:40:14.630: INFO: Created: latency-svc-bq6k4 Feb 12 20:40:14.672: INFO: Got endpoints: latency-svc-bq6k4 [815.770497ms] Feb 12 20:40:14.696: INFO: Created: latency-svc-w2nqg Feb 12 20:40:14.703: INFO: Got endpoints: latency-svc-w2nqg [740.295845ms] Feb 12 20:40:14.726: INFO: Created: latency-svc-chltf Feb 12 20:40:14.732: INFO: Got endpoints: latency-svc-chltf [755.421523ms] Feb 12 20:40:14.749: INFO: Created: latency-svc-b5f4v Feb 12 20:40:14.767: INFO: Got endpoints: latency-svc-b5f4v [753.768179ms] Feb 12 20:40:14.771: INFO: Created: latency-svc-wsh46 Feb 12 20:40:14.809: INFO: Got endpoints: latency-svc-wsh46 [689.336381ms] Feb 12 20:40:14.817: INFO: Created: latency-svc-txstp Feb 12 20:40:14.841: INFO: Got endpoints: latency-svc-txstp [698.626484ms] Feb 12 20:40:14.848: INFO: Created: latency-svc-f5mhv Feb 12 20:40:14.860: INFO: Got endpoints: 
latency-svc-f5mhv [681.489256ms] Feb 12 20:40:14.880: INFO: Created: latency-svc-nvdfk Feb 12 20:40:14.940: INFO: Created: latency-svc-tlgcs Feb 12 20:40:14.940: INFO: Got endpoints: latency-svc-nvdfk [678.758427ms] Feb 12 20:40:14.967: INFO: Got endpoints: latency-svc-tlgcs [699.138175ms] Feb 12 20:40:14.971: INFO: Created: latency-svc-gmhhd Feb 12 20:40:14.982: INFO: Got endpoints: latency-svc-gmhhd [650.883965ms] Feb 12 20:40:15.012: INFO: Created: latency-svc-s6btp Feb 12 20:40:15.035: INFO: Got endpoints: latency-svc-s6btp [634.253379ms] Feb 12 20:40:15.035: INFO: Created: latency-svc-tkhpt Feb 12 20:40:15.109: INFO: Got endpoints: latency-svc-tkhpt [633.82092ms] Feb 12 20:40:15.148: INFO: Created: latency-svc-t6rgl Feb 12 20:40:15.152: INFO: Got endpoints: latency-svc-t6rgl [586.926917ms] Feb 12 20:40:15.294: INFO: Created: latency-svc-fhdv2 Feb 12 20:40:15.305: INFO: Got endpoints: latency-svc-fhdv2 [728.605729ms] Feb 12 20:40:15.320: INFO: Created: latency-svc-7kkmr Feb 12 20:40:15.323: INFO: Got endpoints: latency-svc-7kkmr [710.37654ms] Feb 12 20:40:15.342: INFO: Created: latency-svc-qz9bx Feb 12 20:40:15.348: INFO: Got endpoints: latency-svc-qz9bx [675.254276ms] Feb 12 20:40:15.445: INFO: Created: latency-svc-f6xth Feb 12 20:40:15.448: INFO: Got endpoints: latency-svc-f6xth [744.335839ms] Feb 12 20:40:15.485: INFO: Created: latency-svc-2fldv Feb 12 20:40:15.501: INFO: Created: latency-svc-74hbp Feb 12 20:40:15.503: INFO: Got endpoints: latency-svc-2fldv [770.031186ms] Feb 12 20:40:15.503: INFO: Got endpoints: latency-svc-74hbp [736.706199ms] Feb 12 20:40:15.579: INFO: Created: latency-svc-nqtsd Feb 12 20:40:15.594: INFO: Created: latency-svc-jwqb7 Feb 12 20:40:15.596: INFO: Got endpoints: latency-svc-nqtsd [787.597922ms] Feb 12 20:40:15.611: INFO: Got endpoints: latency-svc-jwqb7 [769.772355ms] Feb 12 20:40:15.614: INFO: Created: latency-svc-hpkpp Feb 12 20:40:15.619: INFO: Got endpoints: latency-svc-hpkpp [759.311615ms] Feb 12 20:40:15.637: INFO: Created: latency-svc-lq7ps Feb 12 20:40:15.643: INFO: Got endpoints: latency-svc-lq7ps [46.649783ms] Feb 12 20:40:15.662: INFO: Created: latency-svc-c7rr5 Feb 12 20:40:15.672: INFO: Got endpoints: latency-svc-c7rr5 [731.751523ms] Feb 12 20:40:15.783: INFO: Created: latency-svc-85q6x Feb 12 20:40:15.787: INFO: Got endpoints: latency-svc-85q6x [820.111502ms] Feb 12 20:40:15.815: INFO: Created: latency-svc-rqkss Feb 12 20:40:15.827: INFO: Got endpoints: latency-svc-rqkss [844.347051ms] Feb 12 20:40:15.850: INFO: Created: latency-svc-tntfv Feb 12 20:40:15.940: INFO: Got endpoints: latency-svc-tntfv [904.729266ms] Feb 12 20:40:15.948: INFO: Created: latency-svc-j4hsb Feb 12 20:40:15.950: INFO: Got endpoints: latency-svc-j4hsb [840.488755ms] Feb 12 20:40:15.973: INFO: Created: latency-svc-x5z9s Feb 12 20:40:15.992: INFO: Got endpoints: latency-svc-x5z9s [840.162473ms] Feb 12 20:40:16.010: INFO: Created: latency-svc-9cp5v Feb 12 20:40:16.013: INFO: Got endpoints: latency-svc-9cp5v [707.726531ms] Feb 12 20:40:16.127: INFO: Created: latency-svc-kslxn Feb 12 20:40:16.130: INFO: Created: latency-svc-gftf5 Feb 12 20:40:16.150: INFO: Got endpoints: latency-svc-gftf5 [802.567517ms] Feb 12 20:40:16.151: INFO: Got endpoints: latency-svc-kslxn [827.762253ms] Feb 12 20:40:16.223: INFO: Created: latency-svc-jrr6m Feb 12 20:40:16.287: INFO: Got endpoints: latency-svc-jrr6m [839.460398ms] Feb 12 20:40:16.309: INFO: Created: latency-svc-5qfgt Feb 12 20:40:16.311: INFO: Got endpoints: latency-svc-5qfgt [808.421162ms] Feb 12 20:40:16.376: INFO: Created: 
latency-svc-bg2mj Feb 12 20:40:16.385: INFO: Got endpoints: latency-svc-bg2mj [881.625608ms] Feb 12 20:40:16.468: INFO: Created: latency-svc-hllr5 Feb 12 20:40:16.474: INFO: Got endpoints: latency-svc-hllr5 [862.842196ms] Feb 12 20:40:16.508: INFO: Created: latency-svc-km4gs Feb 12 20:40:16.515: INFO: Got endpoints: latency-svc-km4gs [895.449667ms] Feb 12 20:40:16.541: INFO: Created: latency-svc-8k4qk Feb 12 20:40:16.553: INFO: Got endpoints: latency-svc-8k4qk [909.555076ms] Feb 12 20:40:16.612: INFO: Created: latency-svc-88g8p Feb 12 20:40:16.614: INFO: Got endpoints: latency-svc-88g8p [941.609018ms] Feb 12 20:40:16.655: INFO: Created: latency-svc-7h7mm Feb 12 20:40:16.669: INFO: Got endpoints: latency-svc-7h7mm [881.085365ms] Feb 12 20:40:16.685: INFO: Created: latency-svc-2xptc Feb 12 20:40:16.687: INFO: Got endpoints: latency-svc-2xptc [860.550021ms] Feb 12 20:40:16.746: INFO: Created: latency-svc-b4j7x Feb 12 20:40:16.758: INFO: Created: latency-svc-pbqql Feb 12 20:40:16.758: INFO: Got endpoints: latency-svc-b4j7x [817.867679ms] Feb 12 20:40:16.780: INFO: Got endpoints: latency-svc-pbqql [830.754446ms] Feb 12 20:40:16.801: INFO: Created: latency-svc-d4c8w Feb 12 20:40:16.808: INFO: Got endpoints: latency-svc-d4c8w [815.674608ms] Feb 12 20:40:16.830: INFO: Created: latency-svc-q7v2g Feb 12 20:40:16.879: INFO: Got endpoints: latency-svc-q7v2g [865.843484ms] Feb 12 20:40:16.892: INFO: Created: latency-svc-zr6l6 Feb 12 20:40:16.937: INFO: Got endpoints: latency-svc-zr6l6 [785.955533ms] Feb 12 20:40:16.961: INFO: Created: latency-svc-n55xz Feb 12 20:40:16.963: INFO: Got endpoints: latency-svc-n55xz [812.795821ms] Feb 12 20:40:17.018: INFO: Created: latency-svc-4bvvm Feb 12 20:40:17.024: INFO: Got endpoints: latency-svc-4bvvm [737.330826ms] Feb 12 20:40:17.094: INFO: Created: latency-svc-fmx94 Feb 12 20:40:17.103: INFO: Got endpoints: latency-svc-fmx94 [791.575555ms] Feb 12 20:40:17.144: INFO: Created: latency-svc-9cnxx Feb 12 20:40:17.149: INFO: Got endpoints: latency-svc-9cnxx [764.074939ms] Feb 12 20:40:17.180: INFO: Created: latency-svc-pz7js Feb 12 20:40:17.184: INFO: Got endpoints: latency-svc-pz7js [709.545938ms] Feb 12 20:40:17.223: INFO: Created: latency-svc-ccw78 Feb 12 20:40:17.312: INFO: Got endpoints: latency-svc-ccw78 [797.078984ms] Feb 12 20:40:17.316: INFO: Created: latency-svc-f6gms Feb 12 20:40:17.318: INFO: Got endpoints: latency-svc-f6gms [765.694555ms] Feb 12 20:40:17.351: INFO: Created: latency-svc-wc72f Feb 12 20:40:17.374: INFO: Got endpoints: latency-svc-wc72f [760.202272ms] Feb 12 20:40:17.379: INFO: Created: latency-svc-8m2qk Feb 12 20:40:17.387: INFO: Got endpoints: latency-svc-8m2qk [717.860142ms] Feb 12 20:40:17.407: INFO: Created: latency-svc-jcxqg Feb 12 20:40:17.470: INFO: Got endpoints: latency-svc-jcxqg [783.11071ms] Feb 12 20:40:17.485: INFO: Created: latency-svc-vqdgv Feb 12 20:40:17.494: INFO: Got endpoints: latency-svc-vqdgv [735.18884ms] Feb 12 20:40:17.534: INFO: Created: latency-svc-7tzjt Feb 12 20:40:17.534: INFO: Got endpoints: latency-svc-7tzjt [753.55697ms] Feb 12 20:40:17.552: INFO: Created: latency-svc-jtfrf Feb 12 20:40:17.556: INFO: Got endpoints: latency-svc-jtfrf [748.319306ms] Feb 12 20:40:17.613: INFO: Created: latency-svc-7tdrt Feb 12 20:40:17.618: INFO: Got endpoints: latency-svc-7tdrt [738.628714ms] Feb 12 20:40:17.647: INFO: Created: latency-svc-nqkwx Feb 12 20:40:17.650: INFO: Got endpoints: latency-svc-nqkwx [712.676732ms] Feb 12 20:40:17.671: INFO: Created: latency-svc-bknkp Feb 12 20:40:17.695: INFO: Got endpoints: 
latency-svc-bknkp [731.268495ms] Feb 12 20:40:17.696: INFO: Created: latency-svc-4njvr Feb 12 20:40:17.699: INFO: Got endpoints: latency-svc-4njvr [674.717751ms] Feb 12 20:40:17.784: INFO: Created: latency-svc-hv85s Feb 12 20:40:17.807: INFO: Created: latency-svc-xj77k Feb 12 20:40:17.809: INFO: Got endpoints: latency-svc-hv85s [706.478425ms] Feb 12 20:40:17.828: INFO: Got endpoints: latency-svc-xj77k [678.482833ms] Feb 12 20:40:17.856: INFO: Created: latency-svc-bg7ps Feb 12 20:40:17.864: INFO: Got endpoints: latency-svc-bg7ps [680.611884ms] Feb 12 20:40:17.933: INFO: Created: latency-svc-kx6xf Feb 12 20:40:17.936: INFO: Got endpoints: latency-svc-kx6xf [623.663849ms] Feb 12 20:40:17.994: INFO: Created: latency-svc-pbvdz Feb 12 20:40:17.997: INFO: Got endpoints: latency-svc-pbvdz [678.901044ms] Feb 12 20:40:18.024: INFO: Created: latency-svc-z8j6t Feb 12 20:40:18.091: INFO: Got endpoints: latency-svc-z8j6t [717.291808ms] Feb 12 20:40:18.104: INFO: Created: latency-svc-ptxrh Feb 12 20:40:18.113: INFO: Got endpoints: latency-svc-ptxrh [726.42231ms] Feb 12 20:40:18.638: INFO: Created: latency-svc-rcvl8 Feb 12 20:40:19.116: INFO: Got endpoints: latency-svc-rcvl8 [1.645249654s] Feb 12 20:40:19.135: INFO: Created: latency-svc-ntt4x Feb 12 20:40:19.139: INFO: Got endpoints: latency-svc-ntt4x [1.645581394s] Feb 12 20:40:19.160: INFO: Created: latency-svc-h6kc9 Feb 12 20:40:19.194: INFO: Got endpoints: latency-svc-h6kc9 [1.659815476s] Feb 12 20:40:19.268: INFO: Created: latency-svc-s876w Feb 12 20:40:19.268: INFO: Got endpoints: latency-svc-s876w [1.711765531s] Feb 12 20:40:19.296: INFO: Created: latency-svc-sffrl Feb 12 20:40:19.301: INFO: Got endpoints: latency-svc-sffrl [1.682875626s] Feb 12 20:40:19.321: INFO: Created: latency-svc-b6n7m Feb 12 20:40:19.328: INFO: Got endpoints: latency-svc-b6n7m [1.678113355s] Feb 12 20:40:19.348: INFO: Created: latency-svc-r2ctl Feb 12 20:40:19.401: INFO: Got endpoints: latency-svc-r2ctl [1.70655965s] Feb 12 20:40:19.405: INFO: Created: latency-svc-h2c4h Feb 12 20:40:19.412: INFO: Got endpoints: latency-svc-h2c4h [1.712334026s] Feb 12 20:40:19.433: INFO: Created: latency-svc-88zlk Feb 12 20:40:19.437: INFO: Got endpoints: latency-svc-88zlk [1.627844562s] Feb 12 20:40:19.462: INFO: Created: latency-svc-klctj Feb 12 20:40:19.466: INFO: Got endpoints: latency-svc-klctj [1.638031583s] Feb 12 20:40:19.502: INFO: Created: latency-svc-n9s5v Feb 12 20:40:19.552: INFO: Got endpoints: latency-svc-n9s5v [1.6870281s] Feb 12 20:40:19.568: INFO: Created: latency-svc-6vsl7 Feb 12 20:40:19.573: INFO: Got endpoints: latency-svc-6vsl7 [1.637273442s] Feb 12 20:40:19.599: INFO: Created: latency-svc-wkb79 Feb 12 20:40:19.601: INFO: Got endpoints: latency-svc-wkb79 [1.603376332s] Feb 12 20:40:19.740: INFO: Created: latency-svc-z568v Feb 12 20:40:19.746: INFO: Got endpoints: latency-svc-z568v [1.654782487s] Feb 12 20:40:19.770: INFO: Created: latency-svc-qpp8h Feb 12 20:40:19.780: INFO: Got endpoints: latency-svc-qpp8h [1.666591393s] Feb 12 20:40:19.821: INFO: Created: latency-svc-hw8zq Feb 12 20:40:19.821: INFO: Got endpoints: latency-svc-hw8zq [705.375148ms] Feb 12 20:40:19.898: INFO: Created: latency-svc-rqpg4 Feb 12 20:40:19.930: INFO: Got endpoints: latency-svc-rqpg4 [790.615197ms] Feb 12 20:40:19.932: INFO: Created: latency-svc-md276 Feb 12 20:40:19.935: INFO: Got endpoints: latency-svc-md276 [740.734034ms] Feb 12 20:40:19.968: INFO: Created: latency-svc-dwjwh Feb 12 20:40:19.969: INFO: Got endpoints: latency-svc-dwjwh [701.438776ms] Feb 12 20:40:20.028: INFO: Created: 
latency-svc-w2988 Feb 12 20:40:20.081: INFO: Created: latency-svc-ssc27 Feb 12 20:40:20.082: INFO: Got endpoints: latency-svc-w2988 [781.681258ms] Feb 12 20:40:20.202: INFO: Got endpoints: latency-svc-ssc27 [873.890479ms] Feb 12 20:40:20.208: INFO: Created: latency-svc-xnm95 Feb 12 20:40:20.209: INFO: Got endpoints: latency-svc-xnm95 [807.69378ms] Feb 12 20:40:20.242: INFO: Created: latency-svc-glnpb Feb 12 20:40:20.249: INFO: Got endpoints: latency-svc-glnpb [836.559749ms] Feb 12 20:40:20.263: INFO: Created: latency-svc-gfzhq Feb 12 20:40:20.269: INFO: Got endpoints: latency-svc-gfzhq [832.178051ms] Feb 12 20:40:20.290: INFO: Created: latency-svc-fk2xd Feb 12 20:40:20.295: INFO: Got endpoints: latency-svc-fk2xd [828.121217ms] Feb 12 20:40:20.329: INFO: Created: latency-svc-wgvcm Feb 12 20:40:20.356: INFO: Created: latency-svc-p9vpr Feb 12 20:40:20.358: INFO: Got endpoints: latency-svc-wgvcm [806.245728ms] Feb 12 20:40:20.365: INFO: Got endpoints: latency-svc-p9vpr [791.443112ms] Feb 12 20:40:20.385: INFO: Created: latency-svc-lslvv Feb 12 20:40:20.400: INFO: Got endpoints: latency-svc-lslvv [799.539608ms] Feb 12 20:40:20.465: INFO: Created: latency-svc-ng4gx Feb 12 20:40:20.522: INFO: Got endpoints: latency-svc-ng4gx [775.616394ms] Feb 12 20:40:20.528: INFO: Created: latency-svc-2479r Feb 12 20:40:20.563: INFO: Got endpoints: latency-svc-2479r [783.099693ms] Feb 12 20:40:20.600: INFO: Created: latency-svc-5zrkt Feb 12 20:40:20.604: INFO: Got endpoints: latency-svc-5zrkt [783.16618ms] Feb 12 20:40:20.634: INFO: Created: latency-svc-64nl6 Feb 12 20:40:20.642: INFO: Got endpoints: latency-svc-64nl6 [711.980978ms] Feb 12 20:40:20.662: INFO: Created: latency-svc-ls6zg Feb 12 20:40:20.679: INFO: Got endpoints: latency-svc-ls6zg [743.523221ms] Feb 12 20:40:20.730: INFO: Created: latency-svc-7fvjx Feb 12 20:40:20.735: INFO: Got endpoints: latency-svc-7fvjx [765.648056ms] Feb 12 20:40:20.807: INFO: Created: latency-svc-2ssrb Feb 12 20:40:20.821: INFO: Got endpoints: latency-svc-2ssrb [738.547029ms] Feb 12 20:40:20.910: INFO: Created: latency-svc-9kvtk Feb 12 20:40:20.918: INFO: Got endpoints: latency-svc-9kvtk [715.510829ms] Feb 12 20:40:21.000: INFO: Created: latency-svc-nfdr6 Feb 12 20:40:21.005: INFO: Got endpoints: latency-svc-nfdr6 [795.284392ms] Feb 12 20:40:21.101: INFO: Created: latency-svc-dqd78 Feb 12 20:40:21.135: INFO: Got endpoints: latency-svc-dqd78 [886.298881ms] Feb 12 20:40:21.135: INFO: Created: latency-svc-5mr9v Feb 12 20:40:21.140: INFO: Got endpoints: latency-svc-5mr9v [870.556381ms] Feb 12 20:40:21.313: INFO: Created: latency-svc-lpvbn Feb 12 20:40:21.349: INFO: Got endpoints: latency-svc-lpvbn [1.054807364s] Feb 12 20:40:21.356: INFO: Created: latency-svc-nndzw Feb 12 20:40:21.360: INFO: Got endpoints: latency-svc-nndzw [1.001707289s] Feb 12 20:40:21.479: INFO: Created: latency-svc-5jnr8 Feb 12 20:40:21.537: INFO: Got endpoints: latency-svc-5jnr8 [1.172393203s] Feb 12 20:40:21.547: INFO: Created: latency-svc-8k68j Feb 12 20:40:21.552: INFO: Got endpoints: latency-svc-8k68j [1.151206609s] Feb 12 20:40:21.679: INFO: Created: latency-svc-8zdbf Feb 12 20:40:21.708: INFO: Got endpoints: latency-svc-8zdbf [1.185378879s] Feb 12 20:40:21.711: INFO: Created: latency-svc-5k648 Feb 12 20:40:21.713: INFO: Got endpoints: latency-svc-5k648 [1.149503387s] Feb 12 20:40:21.736: INFO: Created: latency-svc-ltdpv Feb 12 20:40:21.744: INFO: Got endpoints: latency-svc-ltdpv [1.139452576s] Feb 12 20:40:21.774: INFO: Created: latency-svc-7cbcm Feb 12 20:40:22.359: INFO: Got endpoints: 
latency-svc-7cbcm [1.717213484s] Feb 12 20:40:22.408: INFO: Created: latency-svc-4pdqw Feb 12 20:40:22.421: INFO: Got endpoints: latency-svc-4pdqw [1.742636012s] Feb 12 20:40:22.444: INFO: Created: latency-svc-gltws Feb 12 20:40:22.467: INFO: Got endpoints: latency-svc-gltws [1.73142949s] Feb 12 20:40:22.586: INFO: Created: latency-svc-7sr47 Feb 12 20:40:22.607: INFO: Got endpoints: latency-svc-7sr47 [1.786284108s] Feb 12 20:40:22.630: INFO: Created: latency-svc-kg7f5 Feb 12 20:40:22.633: INFO: Got endpoints: latency-svc-kg7f5 [1.715314949s] Feb 12 20:40:22.660: INFO: Created: latency-svc-kclxx Feb 12 20:40:22.665: INFO: Got endpoints: latency-svc-kclxx [1.660713148s] Feb 12 20:40:22.666: INFO: Latencies: [46.649783ms 88.486558ms 129.78677ms 241.510765ms 250.206357ms 312.035198ms 410.07304ms 417.699965ms 461.542368ms 492.400068ms 586.926917ms 587.60531ms 621.795685ms 623.663849ms 633.82092ms 634.253379ms 650.883965ms 664.650768ms 674.717751ms 675.254276ms 678.482833ms 678.758427ms 678.901044ms 680.611884ms 681.489256ms 689.336381ms 698.626484ms 699.138175ms 701.438776ms 705.375148ms 706.478425ms 707.726531ms 709.545938ms 710.37654ms 711.980978ms 712.676732ms 715.510829ms 717.291808ms 717.860142ms 726.42231ms 728.605729ms 731.268495ms 731.751523ms 735.18884ms 736.706199ms 737.330826ms 738.547029ms 738.628714ms 740.295845ms 740.734034ms 743.523221ms 744.335839ms 748.319306ms 753.55697ms 753.768179ms 754.571118ms 755.421523ms 759.311615ms 760.202272ms 764.074939ms 765.648056ms 765.694555ms 769.772355ms 770.031186ms 772.627875ms 775.616394ms 778.042555ms 779.954537ms 781.681258ms 783.099693ms 783.11071ms 783.16618ms 785.955533ms 787.597922ms 790.615197ms 791.443112ms 791.575555ms 795.284392ms 797.078984ms 798.677985ms 799.539608ms 802.567517ms 803.205708ms 806.245728ms 807.69378ms 808.421162ms 812.795821ms 812.843603ms 814.825684ms 815.674608ms 815.770497ms 817.579737ms 817.867679ms 820.111502ms 827.05047ms 827.21212ms 827.762253ms 828.112351ms 828.121217ms 828.161378ms 830.754446ms 831.991771ms 832.178051ms 833.880671ms 834.334372ms 836.559749ms 839.460398ms 839.490451ms 840.162473ms 840.488755ms 844.347051ms 846.863045ms 851.21836ms 856.677733ms 857.771411ms 859.518578ms 859.677498ms 860.550021ms 862.007101ms 862.842196ms 863.662873ms 865.843484ms 870.556381ms 873.890479ms 877.705906ms 881.085365ms 881.625608ms 883.518863ms 886.298881ms 886.506074ms 895.449667ms 898.93742ms 902.406449ms 903.581076ms 904.729266ms 909.555076ms 914.539497ms 916.00471ms 941.609018ms 945.072876ms 985.300885ms 1.001707289s 1.017491619s 1.021933934s 1.054807364s 1.055581077s 1.078334105s 1.080471432s 1.121596309s 1.12894464s 1.139452576s 1.149503387s 1.151206609s 1.172393203s 1.172792027s 1.185378879s 1.194166562s 1.205218674s 1.271253593s 1.276444266s 1.276855802s 1.333349559s 1.337579993s 1.508030311s 1.603376332s 1.627844562s 1.637273442s 1.638031583s 1.645249654s 1.645581394s 1.654782487s 1.659815476s 1.660713148s 1.666591393s 1.678113355s 1.682875626s 1.6870281s 1.70655965s 1.711765531s 1.712334026s 1.715314949s 1.717213484s 1.73142949s 1.742636012s 1.786284108s 2.032370274s 2.049520759s 2.095052209s 2.097525625s 2.137717542s 2.183162635s 2.189181354s 2.209099235s 2.211934315s 2.220498127s 2.220616394s 2.222525466s 2.23708474s 2.265946542s 2.274121452s] Feb 12 20:40:22.666: INFO: 50 %ile: 830.754446ms Feb 12 20:40:22.666: INFO: 90 %ile: 1.715314949s Feb 12 20:40:22.666: INFO: 99 %ile: 2.265946542s Feb 12 20:40:22.666: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:40:22.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-763" for this suite. • [SLOW TEST:23.713 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":277,"completed":7,"skipped":107,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:40:22.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 12 20:40:23.434: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 12 20:40:25.451: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136823, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136823, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136823, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136823, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:40:27.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136823, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136823, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136823, 
loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136823, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:40:29.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136823, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136823, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136823, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136823, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:40:31.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136823, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136823, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136823, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136823, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 12 20:40:34.504: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Feb 12 20:40:35.504: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Feb 12 20:40:36.504: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Feb 12 20:40:37.504: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Feb 12 20:40:38.504: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Feb 12 20:40:39.504: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document 
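Aside for readers reproducing these lookups by hand: the three discovery documents fetched in the STEPs above are plain GET requests against the API server, and kubectl can issue them directly. A minimal sketch, assuming a kubeconfig pointed at the cluster under test:

# Top-level discovery document; its "groups" array should list admissionregistration.k8s.io
kubectl get --raw /apis
# Group document; lists the served versions (v1 in this run)
kubectl get --raw /apis/admissionregistration.k8s.io
# Group/version document; its "resources" array is what the next STEP inspects
kubectl get --raw /apis/admissionregistration.k8s.io/v1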
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:40:39.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-694" for this suite. STEP: Destroying namespace "webhook-694-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.146 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":277,"completed":8,"skipped":129,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:40:39.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:40:53.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1832" for this suite. • [SLOW TEST:13.428 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":277,"completed":9,"skipped":129,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:40:53.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-3717, will wait for the garbage collector to delete the pods Feb 12 20:41:03.518: INFO: Deleting Job.batch foo took: 11.467ms Feb 12 20:41:03.918: INFO: Terminating Job.batch foo pods took: 400.656079ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:41:42.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3717" for this suite. • [SLOW TEST:49.144 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":277,"completed":10,"skipped":138,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:41:42.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 12 20:41:42.977: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 12 20:41:45.003: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136902, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136902, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136903, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136902, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:41:47.008: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136902, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136902, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136903, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136902, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:41:49.007: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136902, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136902, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136903, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136902, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:41:51.007: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136902, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136902, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136903, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717136902, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 12 20:41:54.058: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Feb 12 20:41:54.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:41:55.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7450" for this suite. STEP: Destroying namespace "webhook-7450-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.202 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":277,"completed":11,"skipped":140,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:41:55.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Feb 12 20:42:04.584: INFO: Successfully updated pod "labelsupdateab554fab-8e7e-48cc-9c0c-60b6d8e341be" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:42:08.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-446" for this suite. 
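The Projected downwardAPI scenario above reduces to a pod whose projected volume exposes metadata.labels as a file, plus a label update that the kubelet propagates into that file. A hand-written sketch of the same pattern (pod name, label, and image are illustrative placeholders, not the objects the framework generates):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo          # illustrative name
  labels:
    stage: before
spec:
  containers:
  - name: main
    image: busybox           # placeholder; the suite uses its own test image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF
# The label change is the analogue of the test's "Successfully updated pod" step;
# the kubelet rewrites /etc/podinfo/labels shortly afterwards.
kubectl label pod labels-demo stage=after --overwrite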
• [SLOW TEST:13.030 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":277,"completed":12,"skipped":142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:42:08.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-a7f04c3f-26a8-4649-b035-029fd6127c6b Feb 12 20:42:08.776: INFO: Pod name my-hostname-basic-a7f04c3f-26a8-4649-b035-029fd6127c6b: Found 0 pods out of 1 Feb 12 20:42:13.782: INFO: Pod name my-hostname-basic-a7f04c3f-26a8-4649-b035-029fd6127c6b: Found 1 pods out of 1 Feb 12 20:42:13.782: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-a7f04c3f-26a8-4649-b035-029fd6127c6b" are running Feb 12 20:42:17.804: INFO: Pod "my-hostname-basic-a7f04c3f-26a8-4649-b035-029fd6127c6b-s6xb5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 20:42:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 20:42:08 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a7f04c3f-26a8-4649-b035-029fd6127c6b]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 20:42:08 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-a7f04c3f-26a8-4649-b035-029fd6127c6b]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 20:42:08 +0000 UTC Reason: Message:}]) Feb 12 20:42:17.805: INFO: Trying to dial the pod Feb 12 20:42:22.826: INFO: Controller my-hostname-basic-a7f04c3f-26a8-4649-b035-029fd6127c6b: Got expected result from replica 1 [my-hostname-basic-a7f04c3f-26a8-4649-b035-029fd6127c6b-s6xb5]: "my-hostname-basic-a7f04c3f-26a8-4649-b035-029fd6127c6b-s6xb5", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:42:22.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-405" for this suite. 
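The ReplicationController scenario above amounts to: create an RC whose replica serves its own hostname over HTTP, wait for the pod to run, then dial it and expect its pod name back. A sketch under those assumptions (the RC name is illustrative, and the image/tag is the community agnhost test image; any server that reports its hostname would do):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: hostname-demo        # illustrative; the test generates my-hostname-basic-<uid>
spec:
  replicas: 1
  selector:
    app: hostname-demo
  template:
    metadata:
      labels:
        app: hostname-demo
    spec:
      containers:
      - name: hostname-demo
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # illustrative tag
        args: ["serve-hostname"]                         # answers HTTP with the pod's hostname
        ports:
        - containerPort: 9376
EOF
kubectl get pods -l app=hostname-demo -o wide
# Dialing <pod-ip>:9376 from inside the cluster should return the pod's own name,
# which is what the "Got expected result from replica" line above verifies.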
• [SLOW TEST:14.165 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":277,"completed":13,"skipped":189,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:42:22.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 12 20:42:22.963: INFO: Waiting up to 5m0s for pod "pod-fd87b95d-b89f-48a7-b355-e25ae788f6b1" in namespace "emptydir-6909" to be "Succeeded or Failed" Feb 12 20:42:22.973: INFO: Pod "pod-fd87b95d-b89f-48a7-b355-e25ae788f6b1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.552966ms Feb 12 20:42:24.996: INFO: Pod "pod-fd87b95d-b89f-48a7-b355-e25ae788f6b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032751356s Feb 12 20:42:27.011: INFO: Pod "pod-fd87b95d-b89f-48a7-b355-e25ae788f6b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047552554s Feb 12 20:42:29.023: INFO: Pod "pod-fd87b95d-b89f-48a7-b355-e25ae788f6b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05952873s Feb 12 20:42:31.063: INFO: Pod "pod-fd87b95d-b89f-48a7-b355-e25ae788f6b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.099562143s STEP: Saw pod success Feb 12 20:42:31.063: INFO: Pod "pod-fd87b95d-b89f-48a7-b355-e25ae788f6b1" satisfied condition "Succeeded or Failed" Feb 12 20:42:31.065: INFO: Trying to get logs from node jerma-node pod pod-fd87b95d-b89f-48a7-b355-e25ae788f6b1 container test-container: STEP: delete the pod Feb 12 20:42:31.197: INFO: Waiting for pod pod-fd87b95d-b89f-48a7-b355-e25ae788f6b1 to disappear Feb 12 20:42:31.201: INFO: Pod pod-fd87b95d-b89f-48a7-b355-e25ae788f6b1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:42:31.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6909" for this suite. 
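The EmptyDir (root,0644,tmpfs) case above decodes as: a container running as root writes a file with mode 0644 into an emptyDir backed by tmpfs (medium: Memory), and the test asserts on the resulting ownership, mode, and filesystem. A minimal busybox approximation (names are illustrative; the suite drives this through its own mounttest image and flags):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox           # placeholder for the suite's mounttest image
    command: ["sh", "-c", "echo hello > /mnt/test/file && chmod 0644 /mnt/test/file && ls -ln /mnt/test/file && grep ' /mnt/test ' /proc/mounts"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory         # tmpfs-backed emptyDir, the "tmpfs" in the test name
EOF
# After the pod completes, the logs should show -rw-r--r-- owned by uid 0
# and a tmpfs entry for /mnt/test:
kubectl logs emptydir-0644-demo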
• [SLOW TEST:8.390 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":14,"skipped":190,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:42:31.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-map-9bae81f6-883a-4804-8fe8-edcbe98f9b03 STEP: Creating a pod to test consume configMaps Feb 12 20:42:31.439: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-784edebc-36d8-4d26-9b62-84e3c5ccdb6a" in namespace "projected-8175" to be "Succeeded or Failed" Feb 12 20:42:31.459: INFO: Pod "pod-projected-configmaps-784edebc-36d8-4d26-9b62-84e3c5ccdb6a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.222202ms Feb 12 20:42:33.469: INFO: Pod "pod-projected-configmaps-784edebc-36d8-4d26-9b62-84e3c5ccdb6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029890629s Feb 12 20:42:35.477: INFO: Pod "pod-projected-configmaps-784edebc-36d8-4d26-9b62-84e3c5ccdb6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038222053s Feb 12 20:42:37.485: INFO: Pod "pod-projected-configmaps-784edebc-36d8-4d26-9b62-84e3c5ccdb6a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046028758s Feb 12 20:42:39.492: INFO: Pod "pod-projected-configmaps-784edebc-36d8-4d26-9b62-84e3c5ccdb6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053161444s STEP: Saw pod success Feb 12 20:42:39.492: INFO: Pod "pod-projected-configmaps-784edebc-36d8-4d26-9b62-84e3c5ccdb6a" satisfied condition "Succeeded or Failed" Feb 12 20:42:39.496: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-784edebc-36d8-4d26-9b62-84e3c5ccdb6a container projected-configmap-volume-test: STEP: delete the pod Feb 12 20:42:39.565: INFO: Waiting for pod pod-projected-configmaps-784edebc-36d8-4d26-9b62-84e3c5ccdb6a to disappear Feb 12 20:42:39.569: INFO: Pod pod-projected-configmaps-784edebc-36d8-4d26-9b62-84e3c5ccdb6a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:42:39.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8175" for this suite. 
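The two Projected configMap specs here (the mappings test that just passed and the defaultMode test that follows) differ only in which knob they assert on: items remaps a ConfigMap key to an arbitrary path, and defaultMode sets the mode bits of the projected files. One sketch covers both; the ConfigMap name, key, paths, and the 0400 mode are illustrative choices, not the framework's generated values:

kubectl create configmap cm-demo --from-literal=data-1=value-1   # illustrative name/key
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox           # placeholder for the suite's mounttest image
    command: ["sh", "-c", "cat /etc/projected/path/to/data-1"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/projected
  volumes:
  - name: cm-volume
    projected:
      defaultMode: 0400      # example mode; YAML reads the leading 0 as octal
      sources:
      - configMap:
          name: cm-demo
          items:
          - key: data-1            # the "mapping": key data-1 surfaces at path/to/data-1
            path: path/to/data-1
EOF
# kubectl logs projected-cm-demo should print value-1.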
• [SLOW TEST:8.365 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":277,"completed":15,"skipped":190,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:42:39.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-e8419d9c-4f7b-47ea-9b92-8d73728136c1 STEP: Creating a pod to test consume configMaps Feb 12 20:42:39.779: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-45d81542-960c-4256-95a4-cfc4c8051c4b" in namespace "projected-2580" to be "Succeeded or Failed" Feb 12 20:42:39.891: INFO: Pod "pod-projected-configmaps-45d81542-960c-4256-95a4-cfc4c8051c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 112.000315ms Feb 12 20:42:41.899: INFO: Pod "pod-projected-configmaps-45d81542-960c-4256-95a4-cfc4c8051c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119857462s Feb 12 20:42:43.905: INFO: Pod "pod-projected-configmaps-45d81542-960c-4256-95a4-cfc4c8051c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125593732s Feb 12 20:42:45.910: INFO: Pod "pod-projected-configmaps-45d81542-960c-4256-95a4-cfc4c8051c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130663679s Feb 12 20:42:47.916: INFO: Pod "pod-projected-configmaps-45d81542-960c-4256-95a4-cfc4c8051c4b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136977172s Feb 12 20:42:49.921: INFO: Pod "pod-projected-configmaps-45d81542-960c-4256-95a4-cfc4c8051c4b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.141713761s STEP: Saw pod success Feb 12 20:42:49.921: INFO: Pod "pod-projected-configmaps-45d81542-960c-4256-95a4-cfc4c8051c4b" satisfied condition "Succeeded or Failed" Feb 12 20:42:49.923: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-45d81542-960c-4256-95a4-cfc4c8051c4b container projected-configmap-volume-test: STEP: delete the pod Feb 12 20:42:49.984: INFO: Waiting for pod pod-projected-configmaps-45d81542-960c-4256-95a4-cfc4c8051c4b to disappear Feb 12 20:42:49.990: INFO: Pod pod-projected-configmaps-45d81542-960c-4256-95a4-cfc4c8051c4b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:42:49.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2580" for this suite. • [SLOW TEST:10.407 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":16,"skipped":197,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:42:50.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1809.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-1809.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1809.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-1809.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1809.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1809.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-1809.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1809.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-1809.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1809.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 12 20:43:00.179: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:00.187: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:00.191: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:00.196: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:00.209: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:00.213: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:00.216: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1809.svc.cluster.local from pod 
dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:00.221: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:00.229: INFO: Lookups using dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local jessie_udp@dns-test-service-2.dns-1809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1809.svc.cluster.local] Feb 12 20:43:05.239: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:05.244: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:05.252: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:05.260: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:05.284: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:05.292: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:05.300: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:05.304: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:05.312: INFO: Lookups using dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-1809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local jessie_udp@dns-test-service-2.dns-1809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1809.svc.cluster.local] Feb 12 20:43:10.237: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:10.243: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:10.249: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:10.255: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:10.276: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:10.297: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:10.312: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:10.324: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:10.335: INFO: Lookups using dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local jessie_udp@dns-test-service-2.dns-1809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1809.svc.cluster.local] Feb 12 20:43:15.240: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:15.246: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:15.250: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:15.257: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:15.273: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:15.281: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:15.286: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:15.290: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:15.299: INFO: Lookups using dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local jessie_udp@dns-test-service-2.dns-1809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1809.svc.cluster.local] Feb 12 20:43:20.239: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:20.287: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:20.292: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:20.297: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested 
resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:20.324: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:20.330: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:20.346: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:20.352: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:20.360: INFO: Lookups using dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local jessie_udp@dns-test-service-2.dns-1809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1809.svc.cluster.local] Feb 12 20:43:25.241: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:25.249: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:25.253: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:25.257: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:25.281: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:25.286: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:25.293: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:25.297: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1809.svc.cluster.local from pod dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b: the server could not find the requested resource (get pods dns-test-65308b05-f26b-4467-8a52-09b1c3db651b) Feb 12 20:43:25.308: INFO: Lookups using dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1809.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1809.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1809.svc.cluster.local jessie_udp@dns-test-service-2.dns-1809.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1809.svc.cluster.local] Feb 12 20:43:30.311: INFO: DNS probes using dns-1809/dns-test-65308b05-f26b-4467-8a52-09b1c3db651b succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:43:30.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1809" for this suite. • [SLOW TEST:40.532 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":277,"completed":17,"skipped":199,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:43:30.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:44:02.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1660" for this suite. 
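[Editorial example] The Job test above ("tasks sometimes fail and are locally restarted") hinges on restartPolicy: OnFailure, which makes the kubelet restart a failed container in place instead of the Job controller creating replacement pods. A minimal sketch of such a Job; the name, image, and failure command are illustrative assumptions, not taken from the log:

apiVersion: batch/v1
kind: Job
metadata:
  name: rand-fail-job            # illustrative name
spec:
  completions: 2
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure   # failed containers are restarted locally by the kubelet
      containers:
      - name: worker
        image: busybox
        # exits 0 or 1 depending on the current second, so some attempts fail
        command: ["sh", "-c", "exit $(( $(date +%s) % 2 ))"]

The Job still reaches spec.completions successes because each failure is retried inside the same pod, which is what "locally restarted" refers to.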
• [SLOW TEST:32.251 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":277,"completed":18,"skipped":234,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:44:02.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-4447fc39-ca00-4563-86c8-2b2dac97d080 STEP: Creating a pod to test consume secrets Feb 12 20:44:02.947: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a0ad5a91-c060-4e4d-9ea9-0d43ab7aa26d" in namespace "projected-1598" to be "Succeeded or Failed" Feb 12 20:44:02.953: INFO: Pod "pod-projected-secrets-a0ad5a91-c060-4e4d-9ea9-0d43ab7aa26d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.888891ms Feb 12 20:44:04.959: INFO: Pod "pod-projected-secrets-a0ad5a91-c060-4e4d-9ea9-0d43ab7aa26d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011590716s Feb 12 20:44:06.965: INFO: Pod "pod-projected-secrets-a0ad5a91-c060-4e4d-9ea9-0d43ab7aa26d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017979201s Feb 12 20:44:09.008: INFO: Pod "pod-projected-secrets-a0ad5a91-c060-4e4d-9ea9-0d43ab7aa26d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061300421s Feb 12 20:44:11.016: INFO: Pod "pod-projected-secrets-a0ad5a91-c060-4e4d-9ea9-0d43ab7aa26d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068691121s Feb 12 20:44:13.021: INFO: Pod "pod-projected-secrets-a0ad5a91-c060-4e4d-9ea9-0d43ab7aa26d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074422065s STEP: Saw pod success Feb 12 20:44:13.022: INFO: Pod "pod-projected-secrets-a0ad5a91-c060-4e4d-9ea9-0d43ab7aa26d" satisfied condition "Succeeded or Failed" Feb 12 20:44:13.025: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-a0ad5a91-c060-4e4d-9ea9-0d43ab7aa26d container secret-volume-test: STEP: delete the pod Feb 12 20:44:13.106: INFO: Waiting for pod pod-projected-secrets-a0ad5a91-c060-4e4d-9ea9-0d43ab7aa26d to disappear Feb 12 20:44:13.119: INFO: Pod pod-projected-secrets-a0ad5a91-c060-4e4d-9ea9-0d43ab7aa26d no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:44:13.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1598" for this suite. 
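[Editorial example] The projected-secret test above mounts the same Secret through two separate projected volumes in one pod. A minimal sketch; the names and image are illustrative (the test generates unique names like the ones in the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test   # illustrative Secret name
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test   # same Secret, second volume
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret-1/* /etc/projected-secret-2/*"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/projected-secret-2
      readOnly: true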
• [SLOW TEST:10.395 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":277,"completed":19,"skipped":270,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:44:13.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 12 20:44:14.243: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 12 20:44:16.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137054, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137054, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137054, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137054, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:44:18.285: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137054, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137054, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137054, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137054, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 12 20:44:21.446: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:44:21.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5232" for this suite. STEP: Destroying namespace "webhook-5232-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.687 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":277,"completed":20,"skipped":278,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:44:21.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:44:22.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-1352" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":277,"completed":21,"skipped":294,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:44:22.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:44:57.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6281" for this suite. STEP: Destroying namespace "nsdeletetest-962" for this suite. Feb 12 20:44:57.462: INFO: Namespace nsdeletetest-962 was already deleted STEP: Destroying namespace "nsdeletetest-2130" for this suite. 
• [SLOW TEST:35.392 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":277,"completed":22,"skipped":295,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:44:57.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-acfa363c-974e-48eb-95fb-62811ca0384f STEP: Creating a pod to test consume secrets Feb 12 20:44:57.621: INFO: Waiting up to 5m0s for pod "pod-secrets-ca58d6b6-c736-4a7b-85f2-83489f018eed" in namespace "secrets-8939" to be "Succeeded or Failed" Feb 12 20:44:57.641: INFO: Pod "pod-secrets-ca58d6b6-c736-4a7b-85f2-83489f018eed": Phase="Pending", Reason="", readiness=false. Elapsed: 19.392861ms Feb 12 20:44:59.647: INFO: Pod "pod-secrets-ca58d6b6-c736-4a7b-85f2-83489f018eed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025368573s Feb 12 20:45:01.748: INFO: Pod "pod-secrets-ca58d6b6-c736-4a7b-85f2-83489f018eed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126984609s Feb 12 20:45:03.754: INFO: Pod "pod-secrets-ca58d6b6-c736-4a7b-85f2-83489f018eed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.13270373s Feb 12 20:45:05.761: INFO: Pod "pod-secrets-ca58d6b6-c736-4a7b-85f2-83489f018eed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.13948946s STEP: Saw pod success Feb 12 20:45:05.761: INFO: Pod "pod-secrets-ca58d6b6-c736-4a7b-85f2-83489f018eed" satisfied condition "Succeeded or Failed" Feb 12 20:45:05.766: INFO: Trying to get logs from node jerma-node pod pod-secrets-ca58d6b6-c736-4a7b-85f2-83489f018eed container secret-volume-test: STEP: delete the pod Feb 12 20:45:05.841: INFO: Waiting for pod pod-secrets-ca58d6b6-c736-4a7b-85f2-83489f018eed to disappear Feb 12 20:45:05.890: INFO: Pod pod-secrets-ca58d6b6-c736-4a7b-85f2-83489f018eed no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:45:05.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8939" for this suite. 
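[Editorial example] The Secrets test above sets defaultMode on the secret volume, and the test container then checks the resulting file permissions, which is why the variant is tagged [LinuxOnly]. A minimal sketch; names and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example   # illustrative; the test generates a unique name
      defaultMode: 0400                 # applied to every file projected from the Secret
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume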
• [SLOW TEST:8.449 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":23,"skipped":309,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:45:05.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-e9330fe4-5b8e-43be-9da3-693feaead1bc STEP: Creating a pod to test consume secrets Feb 12 20:45:06.127: INFO: Waiting up to 5m0s for pod "pod-secrets-ccf0e4ed-b44f-4c89-bc2a-47e8e13f2f14" in namespace "secrets-7573" to be "Succeeded or Failed" Feb 12 20:45:06.181: INFO: Pod "pod-secrets-ccf0e4ed-b44f-4c89-bc2a-47e8e13f2f14": Phase="Pending", Reason="", readiness=false. Elapsed: 53.972422ms Feb 12 20:45:08.190: INFO: Pod "pod-secrets-ccf0e4ed-b44f-4c89-bc2a-47e8e13f2f14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0630669s Feb 12 20:45:10.232: INFO: Pod "pod-secrets-ccf0e4ed-b44f-4c89-bc2a-47e8e13f2f14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105082458s Feb 12 20:45:12.292: INFO: Pod "pod-secrets-ccf0e4ed-b44f-4c89-bc2a-47e8e13f2f14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164547997s Feb 12 20:45:14.299: INFO: Pod "pod-secrets-ccf0e4ed-b44f-4c89-bc2a-47e8e13f2f14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.171352588s STEP: Saw pod success Feb 12 20:45:14.299: INFO: Pod "pod-secrets-ccf0e4ed-b44f-4c89-bc2a-47e8e13f2f14" satisfied condition "Succeeded or Failed" Feb 12 20:45:14.302: INFO: Trying to get logs from node jerma-node pod pod-secrets-ccf0e4ed-b44f-4c89-bc2a-47e8e13f2f14 container secret-volume-test: STEP: delete the pod Feb 12 20:45:14.414: INFO: Waiting for pod pod-secrets-ccf0e4ed-b44f-4c89-bc2a-47e8e13f2f14 to disappear Feb 12 20:45:14.418: INFO: Pod pod-secrets-ccf0e4ed-b44f-4c89-bc2a-47e8e13f2f14 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:45:14.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7573" for this suite. 
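[Editorial example] The "mappings and Item Mode" variant above differs from the previous test in that it projects selected keys under remapped paths with a per-item mode. A minimal sketch; names, image, and the key are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-map-example
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:
      - key: data-1              # only this key is projected...
        path: new-path-data-1    # ...under a remapped file name
        mode: 0400               # per-item mode overrides defaultMode
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume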
• [SLOW TEST:8.503 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":24,"skipped":329,"failed":0} SSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:45:14.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-2992/configmap-test-99426467-6bb5-4002-9865-35236fb19cfa STEP: Creating a pod to test consume configMaps Feb 12 20:45:14.570: INFO: Waiting up to 5m0s for pod "pod-configmaps-dc65c65a-2f83-478b-94ae-dacc3de440da" in namespace "configmap-2992" to be "Succeeded or Failed" Feb 12 20:45:14.600: INFO: Pod "pod-configmaps-dc65c65a-2f83-478b-94ae-dacc3de440da": Phase="Pending", Reason="", readiness=false. Elapsed: 29.74306ms Feb 12 20:45:16.607: INFO: Pod "pod-configmaps-dc65c65a-2f83-478b-94ae-dacc3de440da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036827395s Feb 12 20:45:18.644: INFO: Pod "pod-configmaps-dc65c65a-2f83-478b-94ae-dacc3de440da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073332401s Feb 12 20:45:20.674: INFO: Pod "pod-configmaps-dc65c65a-2f83-478b-94ae-dacc3de440da": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103533917s Feb 12 20:45:22.681: INFO: Pod "pod-configmaps-dc65c65a-2f83-478b-94ae-dacc3de440da": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110985995s Feb 12 20:45:24.687: INFO: Pod "pod-configmaps-dc65c65a-2f83-478b-94ae-dacc3de440da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.116787876s STEP: Saw pod success Feb 12 20:45:24.687: INFO: Pod "pod-configmaps-dc65c65a-2f83-478b-94ae-dacc3de440da" satisfied condition "Succeeded or Failed" Feb 12 20:45:24.692: INFO: Trying to get logs from node jerma-node pod pod-configmaps-dc65c65a-2f83-478b-94ae-dacc3de440da container env-test: STEP: delete the pod Feb 12 20:45:24.774: INFO: Waiting for pod pod-configmaps-dc65c65a-2f83-478b-94ae-dacc3de440da to disappear Feb 12 20:45:24.834: INFO: Pod pod-configmaps-dc65c65a-2f83-478b-94ae-dacc3de440da no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:45:24.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2992" for this suite. 
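[Editorial example] The ConfigMap test above consumes a key as an environment variable via configMapKeyRef rather than a volume. A minimal sketch; names, image, and data are illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-example
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep DATA_1"]   # expects DATA_1=value-1
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test-example
          key: data-1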
• [SLOW TEST:10.427 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":277,"completed":25,"skipped":336,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:45:24.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-v67v STEP: Creating a pod to test atomic-volume-subpath Feb 12 20:45:24.988: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-v67v" in namespace "subpath-448" to be "Succeeded or Failed" Feb 12 20:45:25.006: INFO: Pod "pod-subpath-test-downwardapi-v67v": Phase="Pending", Reason="", readiness=false. Elapsed: 17.57052ms Feb 12 20:45:27.012: INFO: Pod "pod-subpath-test-downwardapi-v67v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024143775s Feb 12 20:45:29.021: INFO: Pod "pod-subpath-test-downwardapi-v67v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032570591s Feb 12 20:45:31.025: INFO: Pod "pod-subpath-test-downwardapi-v67v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036624939s Feb 12 20:45:33.031: INFO: Pod "pod-subpath-test-downwardapi-v67v": Phase="Running", Reason="", readiness=true. Elapsed: 8.042580015s Feb 12 20:45:35.037: INFO: Pod "pod-subpath-test-downwardapi-v67v": Phase="Running", Reason="", readiness=true. Elapsed: 10.048411655s Feb 12 20:45:37.051: INFO: Pod "pod-subpath-test-downwardapi-v67v": Phase="Running", Reason="", readiness=true. Elapsed: 12.062994831s Feb 12 20:45:39.078: INFO: Pod "pod-subpath-test-downwardapi-v67v": Phase="Running", Reason="", readiness=true. Elapsed: 14.089718079s Feb 12 20:45:41.085: INFO: Pod "pod-subpath-test-downwardapi-v67v": Phase="Running", Reason="", readiness=true. Elapsed: 16.096490478s Feb 12 20:45:43.091: INFO: Pod "pod-subpath-test-downwardapi-v67v": Phase="Running", Reason="", readiness=true. Elapsed: 18.103207053s Feb 12 20:45:45.098: INFO: Pod "pod-subpath-test-downwardapi-v67v": Phase="Running", Reason="", readiness=true. Elapsed: 20.1097697s Feb 12 20:45:47.103: INFO: Pod "pod-subpath-test-downwardapi-v67v": Phase="Running", Reason="", readiness=true. Elapsed: 22.115248194s Feb 12 20:45:49.115: INFO: Pod "pod-subpath-test-downwardapi-v67v": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.12699266s Feb 12 20:45:51.120: INFO: Pod "pod-subpath-test-downwardapi-v67v": Phase="Running", Reason="", readiness=true. Elapsed: 26.132271673s Feb 12 20:45:53.127: INFO: Pod "pod-subpath-test-downwardapi-v67v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.138595957s STEP: Saw pod success Feb 12 20:45:53.127: INFO: Pod "pod-subpath-test-downwardapi-v67v" satisfied condition "Succeeded or Failed" Feb 12 20:45:53.129: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-v67v container test-container-subpath-downwardapi-v67v: STEP: delete the pod Feb 12 20:45:53.312: INFO: Waiting for pod pod-subpath-test-downwardapi-v67v to disappear Feb 12 20:45:53.320: INFO: Pod pod-subpath-test-downwardapi-v67v no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-v67v Feb 12 20:45:53.320: INFO: Deleting pod "pod-subpath-test-downwardapi-v67v" in namespace "subpath-448" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:45:53.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-448" for this suite. • [SLOW TEST:28.524 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":277,"completed":26,"skipped":349,"failed":0} S ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:45:53.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Feb 12 20:45:53.484: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Feb 12 20:45:55.540: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:45:56.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2939" for this suite. 
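[Editorial example] The ReplicationController test above creates two objects both named "condition-test" (names taken from the log): a ResourceQuota capping the namespace at two pods and an RC that asks for more. A sketch; the replica count and image are illustrative assumptions:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"                 # quota allows only two pods in the namespace
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                 # exceeds the quota, so the RC surfaces a ReplicaFailure condition
  selector:
    app: condition-test
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1

Scaling replicas back within the quota clears the failure condition, which is the second half of what the test checks.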
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":277,"completed":27,"skipped":350,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:45:56.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Feb 12 20:45:57.935: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ecaa0bd5-63d3-4add-ae71-5a946787f09b", Controller:(*bool)(0xc00051ab22), BlockOwnerDeletion:(*bool)(0xc00051ab23)}} Feb 12 20:45:57.962: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"bd5345b2-10b6-43d2-80df-87d4d4cc78ee", Controller:(*bool)(0xc00291406a), BlockOwnerDeletion:(*bool)(0xc00291406b)}} Feb 12 20:45:58.019: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"8cf13732-296f-43c2-b89a-b4e084feadde", Controller:(*bool)(0xc0029145aa), BlockOwnerDeletion:(*bool)(0xc0029145ab)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:46:05.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4686" for this suite. • [SLOW TEST:9.498 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":277,"completed":28,"skipped":354,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:46:06.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Feb 12 20:46:06.830: INFO: Number of nodes with available pods: 0 Feb 12 20:46:06.830: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:08.786: INFO: Number of nodes with available pods: 0 Feb 12 20:46:08.786: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:09.716: INFO: Number of nodes with available pods: 0 Feb 12 20:46:09.716: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:09.840: INFO: Number of nodes with available pods: 0 Feb 12 20:46:09.840: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:11.139: INFO: Number of nodes with available pods: 0 Feb 12 20:46:11.139: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:12.070: INFO: Number of nodes with available pods: 0 Feb 12 20:46:12.070: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:13.007: INFO: Number of nodes with available pods: 0 Feb 12 20:46:13.007: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:14.231: INFO: Number of nodes with available pods: 0 Feb 12 20:46:14.231: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:15.082: INFO: Number of nodes with available pods: 0 Feb 12 20:46:15.082: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:15.851: INFO: Number of nodes with available pods: 0 Feb 12 20:46:15.851: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:16.845: INFO: Number of nodes with available pods: 0 Feb 12 20:46:16.845: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:19.311: INFO: Number of nodes with available pods: 0 Feb 12 20:46:19.311: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:21.535: INFO: Number of nodes with available pods: 0 Feb 12 20:46:21.536: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:21.880: INFO: Number of nodes with available pods: 0 Feb 12 20:46:21.880: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:22.847: INFO: Number of nodes with available pods: 0 Feb 12 20:46:22.847: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:23.844: INFO: Number of nodes with available pods: 0 Feb 12 20:46:23.845: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:24.844: INFO: Number of nodes with available pods: 1 Feb 12 20:46:24.844: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:25.844: INFO: Number of nodes with available pods: 1 Feb 12 20:46:25.844: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:26.851: INFO: Number of nodes with available pods: 1 Feb 12 20:46:26.851: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:46:27.843: INFO: Number of nodes with available pods: 2 Feb 12 20:46:27.843: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Feb 12 20:46:27.913: INFO: Number of nodes with available pods: 1 Feb 12 20:46:27.913: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:46:28.957: INFO: Number of nodes with available pods: 1 Feb 12 20:46:28.957: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:46:30.026: INFO: Number of nodes with available pods: 1 Feb 12 20:46:30.026: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:46:30.927: INFO: Number of nodes with available pods: 1 Feb 12 20:46:30.927: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:46:31.940: INFO: Number of nodes with available pods: 1 Feb 12 20:46:31.941: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:46:33.010: INFO: Number of nodes with available pods: 1 Feb 12 20:46:33.010: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:46:33.925: INFO: Number of nodes with available pods: 1 Feb 12 20:46:33.925: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:46:35.366: INFO: Number of nodes with available pods: 1 Feb 12 20:46:35.366: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:46:36.115: INFO: Number of nodes with available pods: 1 Feb 12 20:46:36.115: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:46:36.928: INFO: Number of nodes with available pods: 1 Feb 12 20:46:36.928: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:46:37.950: INFO: Number of nodes with available pods: 1 Feb 12 20:46:37.950: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:46:38.934: INFO: Number of nodes with available pods: 2 Feb 12 20:46:38.934: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9080, will wait for the garbage collector to delete the pods Feb 12 20:46:39.011: INFO: Deleting DaemonSet.extensions daemon-set took: 9.974473ms Feb 12 20:46:39.411: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.293757ms Feb 12 20:46:52.419: INFO: Number of nodes with available pods: 0 Feb 12 20:46:52.419: INFO: Number of running nodes: 0, number of available pods: 0 Feb 12 20:46:52.424: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9080/daemonsets","resourceVersion":"8013667"},"items":null} Feb 12 20:46:52.429: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9080/pods","resourceVersion":"8013667"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:46:52.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9080" for this suite. 
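[Editorial example] The DaemonSet under test is named "daemon-set" (from the log); a minimal equivalent manifest, with labels and image as illustrative assumptions:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1

The controller keeps one pod per eligible node; when the test forces a pod's phase to Failed, the controller deletes it and creates a replacement, which is the retry behavior asserted above.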
• [SLOW TEST:46.379 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":277,"completed":29,"skipped":358,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:46:52.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:229 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Starting the proxy Feb 12 20:46:52.558: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix687984736/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:46:52.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-459" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":277,"completed":30,"skipped":372,"failed":0} SSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:46:52.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Feb 12 20:46:52.746: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:47:00.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8547" for this suite. 
• [SLOW TEST:8.267 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":277,"completed":31,"skipped":376,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:47:00.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Feb 12 20:47:09.611: INFO: Successfully updated pod "annotationupdatee3d5eb8e-79eb-490f-9e77-d5264ad4689d" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:47:11.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5632" for this suite. 
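[Editorial example] The projected-downwardAPI test above works because annotation values are projected into a file that the kubelet rewrites when pod metadata changes, so an annotation update shows up inside the running container. An illustrative pod; the name, image, and annotation are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example
  annotations:
    build: one                  # updated later, as the test does
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations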
• [SLOW TEST:10.786 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":277,"completed":32,"skipped":387,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:47:11.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Feb 12 20:47:11.772: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:47:26.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4519" for this suite. 
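[Editorial example] The init-container test above ("PodSpec: initContainers in spec.initContainers") amounts to a pod shaped like the following; names, image, and commands are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: init-pod-example
spec:
  restartPolicy: Never
  initContainers:             # run sequentially, each to completion, before run-1 starts
  - name: init-1
    image: busybox
    command: ["true"]
  - name: init-2
    image: busybox
    command: ["true"]
  containers:
  - name: run-1
    image: busybox
    command: ["true"]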
• [SLOW TEST:14.405 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":277,"completed":33,"skipped":411,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:47:26.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 12 20:47:26.306: INFO: Waiting up to 5m0s for pod "pod-8747bc12-a28c-46d4-b2fa-a714e47a50d5" in namespace "emptydir-4923" to be "Succeeded or Failed" Feb 12 20:47:26.322: INFO: Pod "pod-8747bc12-a28c-46d4-b2fa-a714e47a50d5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.197925ms Feb 12 20:47:28.329: INFO: Pod "pod-8747bc12-a28c-46d4-b2fa-a714e47a50d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023165742s Feb 12 20:47:30.336: INFO: Pod "pod-8747bc12-a28c-46d4-b2fa-a714e47a50d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030161206s Feb 12 20:47:32.347: INFO: Pod "pod-8747bc12-a28c-46d4-b2fa-a714e47a50d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041170346s Feb 12 20:47:34.353: INFO: Pod "pod-8747bc12-a28c-46d4-b2fa-a714e47a50d5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046511633s Feb 12 20:47:36.362: INFO: Pod "pod-8747bc12-a28c-46d4-b2fa-a714e47a50d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055955906s STEP: Saw pod success Feb 12 20:47:36.362: INFO: Pod "pod-8747bc12-a28c-46d4-b2fa-a714e47a50d5" satisfied condition "Succeeded or Failed" Feb 12 20:47:36.368: INFO: Trying to get logs from node jerma-node pod pod-8747bc12-a28c-46d4-b2fa-a714e47a50d5 container test-container: STEP: delete the pod Feb 12 20:47:36.427: INFO: Waiting for pod pod-8747bc12-a28c-46d4-b2fa-a714e47a50d5 to disappear Feb 12 20:47:36.432: INFO: Pod pod-8747bc12-a28c-46d4-b2fa-a714e47a50d5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:47:36.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4923" for this suite. 
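[Editorial example] The emptyDir tests in this suite differ only in volume medium, file mode, and user. A sketch of the (root,0777,default) variant, with the tmpfs variant noted inline; image and paths are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume && echo ok > /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}              # default medium is node-local storage; medium: Memory gives a tmpfs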
• [SLOW TEST:10.356 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":34,"skipped":440,"failed":0} [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:47:36.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Feb 12 20:47:36.562: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2b1a94bc-3b9d-4f36-b5a7-994d38100722" in namespace "downward-api-6964" to be "Succeeded or Failed" Feb 12 20:47:36.569: INFO: Pod "downwardapi-volume-2b1a94bc-3b9d-4f36-b5a7-994d38100722": Phase="Pending", Reason="", readiness=false. Elapsed: 6.930858ms Feb 12 20:47:38.602: INFO: Pod "downwardapi-volume-2b1a94bc-3b9d-4f36-b5a7-994d38100722": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039508571s Feb 12 20:47:40.610: INFO: Pod "downwardapi-volume-2b1a94bc-3b9d-4f36-b5a7-994d38100722": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047959494s Feb 12 20:47:42.619: INFO: Pod "downwardapi-volume-2b1a94bc-3b9d-4f36-b5a7-994d38100722": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057372211s Feb 12 20:47:44.624: INFO: Pod "downwardapi-volume-2b1a94bc-3b9d-4f36-b5a7-994d38100722": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061570202s STEP: Saw pod success Feb 12 20:47:44.624: INFO: Pod "downwardapi-volume-2b1a94bc-3b9d-4f36-b5a7-994d38100722" satisfied condition "Succeeded or Failed" Feb 12 20:47:44.626: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2b1a94bc-3b9d-4f36-b5a7-994d38100722 container client-container: STEP: delete the pod Feb 12 20:47:44.689: INFO: Waiting for pod downwardapi-volume-2b1a94bc-3b9d-4f36-b5a7-994d38100722 to disappear Feb 12 20:47:44.705: INFO: Pod downwardapi-volume-2b1a94bc-3b9d-4f36-b5a7-994d38100722 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:47:44.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6964" for this suite. 
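Aside (not part of the captured run): the cpu-request value the test reads comes from a downward API volume with a resourceFieldRef. A hedged sketch of such a volume, assuming k8s.io/api types; the container name matches the log's client-container but is otherwise arbitrary:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Projects the named container's CPU request into a file; the e2e pod
	// then cats the file and compares the value against the pod spec.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_request",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "requests.cpu",
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```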
• [SLOW TEST:8.262 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":277,"completed":35,"skipped":440,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:47:44.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 12 20:47:44.931: INFO: Waiting up to 5m0s for pod "pod-bb26a99b-01c1-4b43-91af-177b3161efcd" in namespace "emptydir-351" to be "Succeeded or Failed" Feb 12 20:47:45.012: INFO: Pod "pod-bb26a99b-01c1-4b43-91af-177b3161efcd": Phase="Pending", Reason="", readiness=false. Elapsed: 80.983118ms Feb 12 20:47:47.018: INFO: Pod "pod-bb26a99b-01c1-4b43-91af-177b3161efcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086265361s Feb 12 20:47:49.030: INFO: Pod "pod-bb26a99b-01c1-4b43-91af-177b3161efcd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098706367s Feb 12 20:47:51.128: INFO: Pod "pod-bb26a99b-01c1-4b43-91af-177b3161efcd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196964833s Feb 12 20:47:53.138: INFO: Pod "pod-bb26a99b-01c1-4b43-91af-177b3161efcd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.206300236s Feb 12 20:47:55.156: INFO: Pod "pod-bb26a99b-01c1-4b43-91af-177b3161efcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.224989029s STEP: Saw pod success Feb 12 20:47:55.156: INFO: Pod "pod-bb26a99b-01c1-4b43-91af-177b3161efcd" satisfied condition "Succeeded or Failed" Feb 12 20:47:55.161: INFO: Trying to get logs from node jerma-node pod pod-bb26a99b-01c1-4b43-91af-177b3161efcd container test-container: STEP: delete the pod Feb 12 20:47:55.197: INFO: Waiting for pod pod-bb26a99b-01c1-4b43-91af-177b3161efcd to disappear Feb 12 20:47:55.203: INFO: Pod pod-bb26a99b-01c1-4b43-91af-177b3161efcd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:47:55.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-351" for this suite. 
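Aside (not part of the captured run): the tmpfs variant differs from the default-medium sketch above by a single field, shown here:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Backing the emptyDir with tmpfs instead of the node's default medium.
	src := corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}
	fmt.Println(src.Medium) // prints: Memory
}
```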
• [SLOW TEST:10.506 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":36,"skipped":479,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:47:55.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:692 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8135 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-8135 I0212 20:47:55.592665 9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8135, replica count: 2 I0212 20:47:58.643618 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 20:48:01.644087 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 20:48:04.644993 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 20:48:07.645521 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 12 20:48:07.645: INFO: Creating new exec pod Feb 12 20:48:14.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8135 execpodt7pdb -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Feb 12 20:48:17.326: INFO: stderr: "I0212 20:48:17.139547 1074 log.go:172] (0xc0000f5080) (0xc0006fff40) Create stream\nI0212 20:48:17.139629 1074 log.go:172] (0xc0000f5080) (0xc0006fff40) Stream added, broadcasting: 1\nI0212 20:48:17.145952 1074 log.go:172] (0xc0000f5080) Reply frame received for 1\nI0212 20:48:17.146023 1074 log.go:172] (0xc0000f5080) (0xc0005bc820) Create stream\nI0212 20:48:17.146039 1074 log.go:172] (0xc0000f5080) (0xc0005bc820) Stream added, broadcasting: 3\nI0212 20:48:17.147555 1074 log.go:172] (0xc0000f5080) Reply frame received for 3\nI0212 20:48:17.147590 1074 log.go:172] (0xc0000f5080) (0xc0007674a0) Create stream\nI0212 20:48:17.147605 1074 log.go:172] (0xc0000f5080) (0xc0007674a0) Stream added, broadcasting: 
5\nI0212 20:48:17.149336 1074 log.go:172] (0xc0000f5080) Reply frame received for 5\nI0212 20:48:17.224071 1074 log.go:172] (0xc0000f5080) Data frame received for 5\nI0212 20:48:17.224189 1074 log.go:172] (0xc0007674a0) (5) Data frame handling\nI0212 20:48:17.224308 1074 log.go:172] (0xc0007674a0) (5) Data frame sent\n+ nc -zv -t -wI0212 20:48:17.225181 1074 log.go:172] (0xc0000f5080) Data frame received for 5\nI0212 20:48:17.225199 1074 log.go:172] (0xc0007674a0) (5) Data frame handling\nI0212 20:48:17.225207 1074 log.go:172] (0xc0007674a0) (5) Data frame sent\n 2 externalname-service 80I0212 20:48:17.225297 1074 log.go:172] (0xc0000f5080) Data frame received for 5\nI0212 20:48:17.225336 1074 log.go:172] (0xc0007674a0) (5) Data frame handling\nI0212 20:48:17.225352 1074 log.go:172] (0xc0007674a0) (5) Data frame sent\n\nI0212 20:48:17.240763 1074 log.go:172] (0xc0000f5080) Data frame received for 5\nI0212 20:48:17.240879 1074 log.go:172] (0xc0007674a0) (5) Data frame handling\nI0212 20:48:17.240972 1074 log.go:172] (0xc0007674a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0212 20:48:17.314594 1074 log.go:172] (0xc0000f5080) Data frame received for 1\nI0212 20:48:17.314795 1074 log.go:172] (0xc0006fff40) (1) Data frame handling\nI0212 20:48:17.315042 1074 log.go:172] (0xc0006fff40) (1) Data frame sent\nI0212 20:48:17.317235 1074 log.go:172] (0xc0000f5080) (0xc0006fff40) Stream removed, broadcasting: 1\nI0212 20:48:17.317868 1074 log.go:172] (0xc0000f5080) (0xc0007674a0) Stream removed, broadcasting: 5\nI0212 20:48:17.318076 1074 log.go:172] (0xc0000f5080) (0xc0005bc820) Stream removed, broadcasting: 3\nI0212 20:48:17.318182 1074 log.go:172] (0xc0000f5080) Go away received\nI0212 20:48:17.318889 1074 log.go:172] (0xc0000f5080) (0xc0006fff40) Stream removed, broadcasting: 1\nI0212 20:48:17.319002 1074 log.go:172] (0xc0000f5080) (0xc0005bc820) Stream removed, broadcasting: 3\nI0212 20:48:17.319121 1074 log.go:172] (0xc0000f5080) (0xc0007674a0) Stream removed, broadcasting: 5\n" Feb 12 20:48:17.326: INFO: stdout: "" Feb 12 20:48:17.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8135 execpodt7pdb -- /bin/sh -x -c nc -zv -t -w 2 10.96.135.63 80' Feb 12 20:48:17.668: INFO: stderr: "I0212 20:48:17.469738 1099 log.go:172] (0xc000b3ad10) (0xc000a221e0) Create stream\nI0212 20:48:17.469844 1099 log.go:172] (0xc000b3ad10) (0xc000a221e0) Stream added, broadcasting: 1\nI0212 20:48:17.472804 1099 log.go:172] (0xc000b3ad10) Reply frame received for 1\nI0212 20:48:17.472837 1099 log.go:172] (0xc000b3ad10) (0xc0009f4000) Create stream\nI0212 20:48:17.472872 1099 log.go:172] (0xc000b3ad10) (0xc0009f4000) Stream added, broadcasting: 3\nI0212 20:48:17.474320 1099 log.go:172] (0xc000b3ad10) Reply frame received for 3\nI0212 20:48:17.474406 1099 log.go:172] (0xc000b3ad10) (0xc000a22320) Create stream\nI0212 20:48:17.474434 1099 log.go:172] (0xc000b3ad10) (0xc000a22320) Stream added, broadcasting: 5\nI0212 20:48:17.477711 1099 log.go:172] (0xc000b3ad10) Reply frame received for 5\nI0212 20:48:17.538897 1099 log.go:172] (0xc000b3ad10) Data frame received for 5\nI0212 20:48:17.538934 1099 log.go:172] (0xc000a22320) (5) Data frame handling\nI0212 20:48:17.538953 1099 log.go:172] (0xc000a22320) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.135.63 80\nI0212 20:48:17.539538 1099 log.go:172] (0xc000b3ad10) Data frame received for 5\nI0212 20:48:17.539549 1099 log.go:172] (0xc000a22320) (5) Data frame handling\nI0212 
20:48:17.539561 1099 log.go:172] (0xc000a22320) (5) Data frame sent\nConnection to 10.96.135.63 80 port [tcp/http] succeeded!\nI0212 20:48:17.644339 1099 log.go:172] (0xc000b3ad10) Data frame received for 1\nI0212 20:48:17.645625 1099 log.go:172] (0xc000a221e0) (1) Data frame handling\nI0212 20:48:17.645712 1099 log.go:172] (0xc000a221e0) (1) Data frame sent\nI0212 20:48:17.645763 1099 log.go:172] (0xc000b3ad10) (0xc000a221e0) Stream removed, broadcasting: 1\nI0212 20:48:17.646891 1099 log.go:172] (0xc000b3ad10) (0xc000a22320) Stream removed, broadcasting: 5\nI0212 20:48:17.647108 1099 log.go:172] (0xc000b3ad10) (0xc0009f4000) Stream removed, broadcasting: 3\nI0212 20:48:17.647185 1099 log.go:172] (0xc000b3ad10) Go away received\nI0212 20:48:17.651723 1099 log.go:172] (0xc000b3ad10) (0xc000a221e0) Stream removed, broadcasting: 1\nI0212 20:48:17.651941 1099 log.go:172] (0xc000b3ad10) (0xc0009f4000) Stream removed, broadcasting: 3\nI0212 20:48:17.652022 1099 log.go:172] (0xc000b3ad10) (0xc000a22320) Stream removed, broadcasting: 5\n" Feb 12 20:48:17.668: INFO: stdout: "" Feb 12 20:48:17.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8135 execpodt7pdb -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30603' Feb 12 20:48:17.961: INFO: stderr: "I0212 20:48:17.790232 1119 log.go:172] (0xc000810a50) (0xc00080c000) Create stream\nI0212 20:48:17.790320 1119 log.go:172] (0xc000810a50) (0xc00080c000) Stream added, broadcasting: 1\nI0212 20:48:17.797881 1119 log.go:172] (0xc000810a50) Reply frame received for 1\nI0212 20:48:17.797982 1119 log.go:172] (0xc000810a50) (0xc0005fdcc0) Create stream\nI0212 20:48:17.797994 1119 log.go:172] (0xc000810a50) (0xc0005fdcc0) Stream added, broadcasting: 3\nI0212 20:48:17.801343 1119 log.go:172] (0xc000810a50) Reply frame received for 3\nI0212 20:48:17.801393 1119 log.go:172] (0xc000810a50) (0xc00080c140) Create stream\nI0212 20:48:17.801403 1119 log.go:172] (0xc000810a50) (0xc00080c140) Stream added, broadcasting: 5\nI0212 20:48:17.804960 1119 log.go:172] (0xc000810a50) Reply frame received for 5\nI0212 20:48:17.880028 1119 log.go:172] (0xc000810a50) Data frame received for 5\nI0212 20:48:17.880093 1119 log.go:172] (0xc00080c140) (5) Data frame handling\nI0212 20:48:17.880123 1119 log.go:172] (0xc00080c140) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30603\nI0212 20:48:17.881509 1119 log.go:172] (0xc000810a50) Data frame received for 5\nI0212 20:48:17.881520 1119 log.go:172] (0xc00080c140) (5) Data frame handling\nI0212 20:48:17.881528 1119 log.go:172] (0xc00080c140) (5) Data frame sent\nConnection to 10.96.2.250 30603 port [tcp/30603] succeeded!\nI0212 20:48:17.955915 1119 log.go:172] (0xc000810a50) (0xc0005fdcc0) Stream removed, broadcasting: 3\nI0212 20:48:17.956202 1119 log.go:172] (0xc000810a50) Data frame received for 1\nI0212 20:48:17.956435 1119 log.go:172] (0xc000810a50) (0xc00080c140) Stream removed, broadcasting: 5\nI0212 20:48:17.956490 1119 log.go:172] (0xc00080c000) (1) Data frame handling\nI0212 20:48:17.956512 1119 log.go:172] (0xc00080c000) (1) Data frame sent\nI0212 20:48:17.956527 1119 log.go:172] (0xc000810a50) (0xc00080c000) Stream removed, broadcasting: 1\nI0212 20:48:17.956552 1119 log.go:172] (0xc000810a50) Go away received\nI0212 20:48:17.957410 1119 log.go:172] (0xc000810a50) (0xc00080c000) Stream removed, broadcasting: 1\nI0212 20:48:17.957434 1119 log.go:172] (0xc000810a50) (0xc0005fdcc0) Stream removed, broadcasting: 3\nI0212 20:48:17.957445 1119 log.go:172] (0xc000810a50) 
(0xc00080c140) Stream removed, broadcasting: 5\n" Feb 12 20:48:17.961: INFO: stdout: "" Feb 12 20:48:17.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8135 execpodt7pdb -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30603' Feb 12 20:48:18.365: INFO: stderr: "I0212 20:48:18.163313 1140 log.go:172] (0xc0009700b0) (0xc0008e81e0) Create stream\nI0212 20:48:18.163516 1140 log.go:172] (0xc0009700b0) (0xc0008e81e0) Stream added, broadcasting: 1\nI0212 20:48:18.167364 1140 log.go:172] (0xc0009700b0) Reply frame received for 1\nI0212 20:48:18.167410 1140 log.go:172] (0xc0009700b0) (0xc000900000) Create stream\nI0212 20:48:18.167420 1140 log.go:172] (0xc0009700b0) (0xc000900000) Stream added, broadcasting: 3\nI0212 20:48:18.170689 1140 log.go:172] (0xc0009700b0) Reply frame received for 3\nI0212 20:48:18.170859 1140 log.go:172] (0xc0009700b0) (0xc0007c0140) Create stream\nI0212 20:48:18.170976 1140 log.go:172] (0xc0009700b0) (0xc0007c0140) Stream added, broadcasting: 5\nI0212 20:48:18.183018 1140 log.go:172] (0xc0009700b0) Reply frame received for 5\nI0212 20:48:18.249303 1140 log.go:172] (0xc0009700b0) Data frame received for 5\nI0212 20:48:18.249478 1140 log.go:172] (0xc0007c0140) (5) Data frame handling\nI0212 20:48:18.249528 1140 log.go:172] (0xc0007c0140) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30603\nI0212 20:48:18.257812 1140 log.go:172] (0xc0009700b0) Data frame received for 5\nI0212 20:48:18.257951 1140 log.go:172] (0xc0007c0140) (5) Data frame handling\nI0212 20:48:18.258020 1140 log.go:172] (0xc0007c0140) (5) Data frame sent\nConnection to 10.96.1.234 30603 port [tcp/30603] succeeded!\nI0212 20:48:18.357365 1140 log.go:172] (0xc0009700b0) (0xc000900000) Stream removed, broadcasting: 3\nI0212 20:48:18.357641 1140 log.go:172] (0xc0009700b0) Data frame received for 1\nI0212 20:48:18.357678 1140 log.go:172] (0xc0009700b0) (0xc0007c0140) Stream removed, broadcasting: 5\nI0212 20:48:18.357838 1140 log.go:172] (0xc0008e81e0) (1) Data frame handling\nI0212 20:48:18.357871 1140 log.go:172] (0xc0008e81e0) (1) Data frame sent\nI0212 20:48:18.357885 1140 log.go:172] (0xc0009700b0) (0xc0008e81e0) Stream removed, broadcasting: 1\nI0212 20:48:18.357908 1140 log.go:172] (0xc0009700b0) Go away received\nI0212 20:48:18.359049 1140 log.go:172] (0xc0009700b0) (0xc0008e81e0) Stream removed, broadcasting: 1\nI0212 20:48:18.359065 1140 log.go:172] (0xc0009700b0) (0xc000900000) Stream removed, broadcasting: 3\nI0212 20:48:18.359080 1140 log.go:172] (0xc0009700b0) (0xc0007c0140) Stream removed, broadcasting: 5\n" Feb 12 20:48:18.365: INFO: stdout: "" Feb 12 20:48:18.365: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:48:18.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8135" for this suite. 
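Aside (not part of the captured run): the type change the test performs amounts to fetching the ExternalName service and rewriting its spec. A sketch with client-go, assuming a vintage (v0.18 or later) whose typed clients take a context; the selector label is a guess at what the test's replication controller stamps on its pods:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	svcs := cs.CoreV1().Services("services-8135")

	// Flip the ExternalName service to NodePort: clear the external name and
	// give it a port plus a pod selector, so kube-proxy starts programming a
	// node port and cluster IP for it.
	svc, err := svcs.Get(context.TODO(), "externalname-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	svc.Spec.Type = corev1.ServiceTypeNodePort
	svc.Spec.ExternalName = ""
	svc.Spec.Ports = []corev1.ServicePort{{Port: 80}}
	svc.Spec.Selector = map[string]string{"name": "externalname-service"}
	if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("service converted to NodePort")
}
```

The nc -zv probes in the log then confirm that the service DNS name, the cluster IP, and both node IP:nodePort endpoints all accept TCP connections.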
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:696 • [SLOW TEST:23.200 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":277,"completed":37,"skipped":485,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:48:18.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 12 20:48:19.310: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 12 20:48:21.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:48:23.332: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, 
loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:48:25.332: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:48:27.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:48:29.331: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137299, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 12 20:48:32.643: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Feb 12 20:48:32.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource 
e2e-test-webhook-9046-crds.webhook.example.com via the AdmissionRegistration API Feb 12 20:48:33.268: INFO: Waiting for webhook configuration to be ready... STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:48:34.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9512" for this suite. STEP: Destroying namespace "webhook-9512-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.698 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":277,"completed":38,"skipped":493,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:48:34.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-5c0319e3-b314-4e9c-8adf-ee7efd8f8b87 STEP: Creating a pod to test consume secrets Feb 12 20:48:34.205: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6aa88b97-d2b2-4e90-8f78-a9fc80179fc9" in namespace "projected-4686" to be "Succeeded or Failed" Feb 12 20:48:34.219: INFO: Pod "pod-projected-secrets-6aa88b97-d2b2-4e90-8f78-a9fc80179fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.841435ms Feb 12 20:48:36.224: INFO: Pod "pod-projected-secrets-6aa88b97-d2b2-4e90-8f78-a9fc80179fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019353929s Feb 12 20:48:38.229: INFO: Pod "pod-projected-secrets-6aa88b97-d2b2-4e90-8f78-a9fc80179fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024191446s Feb 12 20:48:40.259: INFO: Pod "pod-projected-secrets-6aa88b97-d2b2-4e90-8f78-a9fc80179fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054299968s Feb 12 20:48:42.304: INFO: Pod "pod-projected-secrets-6aa88b97-d2b2-4e90-8f78-a9fc80179fc9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099410533s Feb 12 20:48:44.316: INFO: Pod "pod-projected-secrets-6aa88b97-d2b2-4e90-8f78-a9fc80179fc9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.111079033s STEP: Saw pod success Feb 12 20:48:44.316: INFO: Pod "pod-projected-secrets-6aa88b97-d2b2-4e90-8f78-a9fc80179fc9" satisfied condition "Succeeded or Failed" Feb 12 20:48:44.319: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-6aa88b97-d2b2-4e90-8f78-a9fc80179fc9 container projected-secret-volume-test: STEP: delete the pod Feb 12 20:48:44.520: INFO: Waiting for pod pod-projected-secrets-6aa88b97-d2b2-4e90-8f78-a9fc80179fc9 to disappear Feb 12 20:48:44.528: INFO: Pod pod-projected-secrets-6aa88b97-d2b2-4e90-8f78-a9fc80179fc9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:48:44.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4686" for this suite. • [SLOW TEST:10.416 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":39,"skipped":507,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:48:44.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 12 20:48:44.666: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7003 /api/v1/namespaces/watch-7003/configmaps/e2e-watch-test-label-changed 2b7999f1-2388-4dfd-bd3e-8d0c4e234692 8014277 0 2020-02-12 20:48:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Feb 12 20:48:44.666: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7003 /api/v1/namespaces/watch-7003/configmaps/e2e-watch-test-label-changed 2b7999f1-2388-4dfd-bd3e-8d0c4e234692 8014278 0 2020-02-12 20:48:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 12 20:48:44.666: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7003 /api/v1/namespaces/watch-7003/configmaps/e2e-watch-test-label-changed 
2b7999f1-2388-4dfd-bd3e-8d0c4e234692 8014279 0 2020-02-12 20:48:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 12 20:48:54.705: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7003 /api/v1/namespaces/watch-7003/configmaps/e2e-watch-test-label-changed 2b7999f1-2388-4dfd-bd3e-8d0c4e234692 8014312 0 2020-02-12 20:48:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 12 20:48:54.705: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7003 /api/v1/namespaces/watch-7003/configmaps/e2e-watch-test-label-changed 2b7999f1-2388-4dfd-bd3e-8d0c4e234692 8014313 0 2020-02-12 20:48:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 12 20:48:54.705: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7003 /api/v1/namespaces/watch-7003/configmaps/e2e-watch-test-label-changed 2b7999f1-2388-4dfd-bd3e-8d0c4e234692 8014314 0 2020-02-12 20:48:44 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:48:54.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7003" for this suite. 
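Aside (not part of the captured run): the ADDED/MODIFIED/DELETED sequence above is what a label-selector watch delivers when an object is relabelled out of, then back into, the selector. A minimal client-go sketch under the same version assumptions as earlier, with the namespace and label copied from the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch only configmaps carrying the test label; relabelling an object
	// out of the selector surfaces as DELETED, relabelling it back as ADDED,
	// which is exactly the sequence the log above records.
	w, err := cs.CoreV1().ConfigMaps("watch-7003").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
}
```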
• [SLOW TEST:10.200 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":277,"completed":40,"skipped":516,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:48:54.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Feb 12 20:49:07.005: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5980 PodName:pod-sharedvolume-f6d1826b-16c0-4ac7-9ffe-ed6c476637e7 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 20:49:07.005: INFO: >>> kubeConfig: /root/.kube/config I0212 20:49:07.053134 9 log.go:172] (0xc002845340) (0xc0013e7040) Create stream I0212 20:49:07.053205 9 log.go:172] (0xc002845340) (0xc0013e7040) Stream added, broadcasting: 1 I0212 20:49:07.057146 9 log.go:172] (0xc002845340) Reply frame received for 1 I0212 20:49:07.057167 9 log.go:172] (0xc002845340) (0xc001be1860) Create stream I0212 20:49:07.057174 9 log.go:172] (0xc002845340) (0xc001be1860) Stream added, broadcasting: 3 I0212 20:49:07.058611 9 log.go:172] (0xc002845340) Reply frame received for 3 I0212 20:49:07.058636 9 log.go:172] (0xc002845340) (0xc000a91f40) Create stream I0212 20:49:07.058643 9 log.go:172] (0xc002845340) (0xc000a91f40) Stream added, broadcasting: 5 I0212 20:49:07.060701 9 log.go:172] (0xc002845340) Reply frame received for 5 I0212 20:49:07.143875 9 log.go:172] (0xc002845340) Data frame received for 3 I0212 20:49:07.143949 9 log.go:172] (0xc001be1860) (3) Data frame handling I0212 20:49:07.143980 9 log.go:172] (0xc001be1860) (3) Data frame sent I0212 20:49:07.216630 9 log.go:172] (0xc002845340) (0xc001be1860) Stream removed, broadcasting: 3 I0212 20:49:07.216760 9 log.go:172] (0xc002845340) Data frame received for 1 I0212 20:49:07.216790 9 log.go:172] (0xc0013e7040) (1) Data frame handling I0212 20:49:07.216814 9 log.go:172] (0xc0013e7040) (1) Data frame sent I0212 20:49:07.216838 9 log.go:172] (0xc002845340) (0xc0013e7040) Stream removed, broadcasting: 1 I0212 20:49:07.216903 9 log.go:172] (0xc002845340) (0xc000a91f40) Stream removed, broadcasting: 5 I0212 20:49:07.216936 9 log.go:172] (0xc002845340) Go away received I0212 20:49:07.217764 9 log.go:172] (0xc002845340) (0xc0013e7040) Stream removed, broadcasting: 1 I0212 20:49:07.217780 9
log.go:172] (0xc002845340) (0xc001be1860) Stream removed, broadcasting: 3 I0212 20:49:07.217841 9 log.go:172] (0xc002845340) (0xc000a91f40) Stream removed, broadcasting: 5 Feb 12 20:49:07.217: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:49:07.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5980" for this suite. • [SLOW TEST:12.584 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":277,"completed":41,"skipped":531,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:49:07.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Feb 12 20:49:07.400: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c5791258-d9f4-475e-8741-d733d9ac30ba" in namespace "downward-api-7121" to be "Succeeded or Failed" Feb 12 20:49:07.434: INFO: Pod "downwardapi-volume-c5791258-d9f4-475e-8741-d733d9ac30ba": Phase="Pending", Reason="", readiness=false. Elapsed: 34.686069ms Feb 12 20:49:09.442: INFO: Pod "downwardapi-volume-c5791258-d9f4-475e-8741-d733d9ac30ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04236481s Feb 12 20:49:11.790: INFO: Pod "downwardapi-volume-c5791258-d9f4-475e-8741-d733d9ac30ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.390220402s Feb 12 20:49:13.802: INFO: Pod "downwardapi-volume-c5791258-d9f4-475e-8741-d733d9ac30ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.401825426s Feb 12 20:49:15.816: INFO: Pod "downwardapi-volume-c5791258-d9f4-475e-8741-d733d9ac30ba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.416030757s Feb 12 20:49:17.826: INFO: Pod "downwardapi-volume-c5791258-d9f4-475e-8741-d733d9ac30ba": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.426311965s STEP: Saw pod success Feb 12 20:49:17.826: INFO: Pod "downwardapi-volume-c5791258-d9f4-475e-8741-d733d9ac30ba" satisfied condition "Succeeded or Failed" Feb 12 20:49:17.830: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c5791258-d9f4-475e-8741-d733d9ac30ba container client-container: STEP: delete the pod Feb 12 20:49:17.900: INFO: Waiting for pod downwardapi-volume-c5791258-d9f4-475e-8741-d733d9ac30ba to disappear Feb 12 20:49:18.021: INFO: Pod downwardapi-volume-c5791258-d9f4-475e-8741-d733d9ac30ba no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:49:18.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7121" for this suite. • [SLOW TEST:10.736 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":42,"skipped":543,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:49:18.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:692 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:49:18.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4048" for this suite. 
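Aside (not part of the captured run): "listing all namespaces" is just a List against metav1.NamespaceAll; a short client-go sketch under the same version assumptions as earlier:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// metav1.NamespaceAll ("") lists across every namespace, which is how a
	// client can find a service without knowing where it lives.
	list, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range list.Items {
		fmt.Printf("%s/%s\n", s.Namespace, s.Name)
	}
}
```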
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:696 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":277,"completed":43,"skipped":586,"failed":0} ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:49:18.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-2723 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 12 20:49:18.492: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 12 20:49:18.660: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 12 20:49:20.668: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 12 20:49:22.667: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 12 20:49:24.981: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 12 20:49:26.672: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 12 20:49:28.669: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 12 20:49:30.665: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 12 20:49:32.666: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 12 20:49:34.667: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 12 20:49:36.676: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 12 20:49:36.686: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 12 20:49:38.694: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 12 20:49:40.693: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 12 20:49:42.694: INFO: The status of Pod netserver-1 is Running (Ready = false) Feb 12 20:49:44.694: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 12 20:49:52.728: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-2723 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 20:49:52.728: INFO: >>> kubeConfig: /root/.kube/config I0212 20:49:52.768400 9 log.go:172] (0xc002c5c4d0) (0xc001932820) Create stream I0212 20:49:52.768526 9 log.go:172] (0xc002c5c4d0) (0xc001932820) Stream added, broadcasting: 1 I0212 20:49:52.772649 9 log.go:172] (0xc002c5c4d0) Reply frame received for 1 I0212 20:49:52.772710 9 log.go:172] (0xc002c5c4d0) (0xc000b772c0) Create stream I0212 20:49:52.772722 9 log.go:172] 
(0xc002c5c4d0) (0xc000b772c0) Stream added, broadcasting: 3 I0212 20:49:52.775731 9 log.go:172] (0xc002c5c4d0) Reply frame received for 3 I0212 20:49:52.775862 9 log.go:172] (0xc002c5c4d0) (0xc000c1ef00) Create stream I0212 20:49:52.775882 9 log.go:172] (0xc002c5c4d0) (0xc000c1ef00) Stream added, broadcasting: 5 I0212 20:49:52.778667 9 log.go:172] (0xc002c5c4d0) Reply frame received for 5 I0212 20:49:52.862769 9 log.go:172] (0xc002c5c4d0) Data frame received for 3 I0212 20:49:52.862819 9 log.go:172] (0xc000b772c0) (3) Data frame handling I0212 20:49:52.862834 9 log.go:172] (0xc000b772c0) (3) Data frame sent I0212 20:49:52.933661 9 log.go:172] (0xc002c5c4d0) (0xc000b772c0) Stream removed, broadcasting: 3 I0212 20:49:52.933718 9 log.go:172] (0xc002c5c4d0) Data frame received for 1 I0212 20:49:52.933727 9 log.go:172] (0xc001932820) (1) Data frame handling I0212 20:49:52.933739 9 log.go:172] (0xc001932820) (1) Data frame sent I0212 20:49:52.933763 9 log.go:172] (0xc002c5c4d0) (0xc001932820) Stream removed, broadcasting: 1 I0212 20:49:52.933883 9 log.go:172] (0xc002c5c4d0) (0xc000c1ef00) Stream removed, broadcasting: 5 I0212 20:49:52.934052 9 log.go:172] (0xc002c5c4d0) Go away received I0212 20:49:52.934152 9 log.go:172] (0xc002c5c4d0) (0xc001932820) Stream removed, broadcasting: 1 I0212 20:49:52.934169 9 log.go:172] (0xc002c5c4d0) (0xc000b772c0) Stream removed, broadcasting: 3 I0212 20:49:52.934184 9 log.go:172] (0xc002c5c4d0) (0xc000c1ef00) Stream removed, broadcasting: 5 Feb 12 20:49:52.934: INFO: Waiting for responses: map[] Feb 12 20:49:52.949: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-2723 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 20:49:52.949: INFO: >>> kubeConfig: /root/.kube/config I0212 20:49:52.991407 9 log.go:172] (0xc0027da580) (0xc000c1f720) Create stream I0212 20:49:52.991456 9 log.go:172] (0xc0027da580) (0xc000c1f720) Stream added, broadcasting: 1 I0212 20:49:52.994287 9 log.go:172] (0xc0027da580) Reply frame received for 1 I0212 20:49:52.994309 9 log.go:172] (0xc0027da580) (0xc001932a00) Create stream I0212 20:49:52.994318 9 log.go:172] (0xc0027da580) (0xc001932a00) Stream added, broadcasting: 3 I0212 20:49:52.995324 9 log.go:172] (0xc0027da580) Reply frame received for 3 I0212 20:49:52.995341 9 log.go:172] (0xc0027da580) (0xc001932e60) Create stream I0212 20:49:52.995348 9 log.go:172] (0xc0027da580) (0xc001932e60) Stream added, broadcasting: 5 I0212 20:49:52.996277 9 log.go:172] (0xc0027da580) Reply frame received for 5 I0212 20:49:53.077749 9 log.go:172] (0xc0027da580) Data frame received for 3 I0212 20:49:53.077787 9 log.go:172] (0xc001932a00) (3) Data frame handling I0212 20:49:53.077800 9 log.go:172] (0xc001932a00) (3) Data frame sent I0212 20:49:53.137776 9 log.go:172] (0xc0027da580) Data frame received for 1 I0212 20:49:53.137877 9 log.go:172] (0xc0027da580) (0xc001932a00) Stream removed, broadcasting: 3 I0212 20:49:53.137951 9 log.go:172] (0xc000c1f720) (1) Data frame handling I0212 20:49:53.137972 9 log.go:172] (0xc000c1f720) (1) Data frame sent I0212 20:49:53.138008 9 log.go:172] (0xc0027da580) (0xc001932e60) Stream removed, broadcasting: 5 I0212 20:49:53.138060 9 log.go:172] (0xc0027da580) (0xc000c1f720) Stream removed, broadcasting: 1 I0212 20:49:53.138076 9 log.go:172] (0xc0027da580) Go away received I0212 20:49:53.138227 9 log.go:172] 
(0xc0027da580) (0xc000c1f720) Stream removed, broadcasting: 1 I0212 20:49:53.138237 9 log.go:172] (0xc0027da580) (0xc001932a00) Stream removed, broadcasting: 3 I0212 20:49:53.138243 9 log.go:172] (0xc0027da580) (0xc001932e60) Stream removed, broadcasting: 5 Feb 12 20:49:53.138: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:49:53.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2723" for this suite. • [SLOW TEST:34.770 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":277,"completed":44,"skipped":586,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:49:53.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-41038cf9-9191-40b4-ae04-6cc5cc8b016c STEP: Creating configMap with name cm-test-opt-upd-5420ba5d-fccf-483f-ba68-db4f5ba9e606 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-41038cf9-9191-40b4-ae04-6cc5cc8b016c STEP: Updating configmap cm-test-opt-upd-5420ba5d-fccf-483f-ba68-db4f5ba9e606 STEP: Creating configMap with name cm-test-opt-create-c0da8eb6-74f1-47b1-a4d3-e3d6e3be3488 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:50:09.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5867" for this suite. 
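Aside (not part of the captured run): the optional-configmap behaviour above hinges on the Optional flag of a configMap volume source; the pod starts even when the configmap is missing, and the kubelet folds later creates, updates, and deletes into the mounted files. A sketch with a shortened placeholder name:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	// An optional configMap volume: missing configmaps do not block pod
	// startup, and volume contents track the configmap as it changes.
	vol := corev1.Volume{
		Name: "cm-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
				Optional:             &optional,
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```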
• [SLOW TEST:16.373 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":277,"completed":45,"skipped":646,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:50:09.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Feb 12 20:50:09.699: INFO: Pod name pod-release: Found 0 pods out of 1 Feb 12 20:50:14.706: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:50:14.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9329" for this suite. 
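Aside (not part of the captured run): "releasing" a pod is triggered purely by relabelling it so it stops matching the RC's selector. A sketch of that relabel as a strategic-merge patch; the pod name below is a placeholder, since the real one is generated, and the label key is an assumption about what the test's RC selects on:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Overwrite the label the RC selects on; the controller then drops its
	// ownerReference from this pod and creates a replacement replica.
	patch := []byte(`{"metadata":{"labels":{"name":"pod-release-free"}}}`)
	_, err = cs.CoreV1().Pods("replication-controller-9329").Patch(
		context.TODO(), "pod-release-placeholder", types.StrategicMergePatchType,
		patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod released from its controller")
}
```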
• [SLOW TEST:5.355 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":277,"completed":46,"skipped":653,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:50:14.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: set up a multi version CRD Feb 12 20:50:15.169: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:50:31.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4676" for this suite.
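The whole check hangs on the served flag of a CRD version: flipping it to false removes that version's definitions from the OpenAPI document the apiserver publishes, while the still-served version is left untouched. A minimal two-version CRD to see this by hand (hypothetical group and kind, not the test's generated ones):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true          # published in the OpenAPI spec
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: false         # omitted from the published spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
# compare what the apiserver publishes before and after flipping served:
kubectl get --raw /openapi/v2 | grep -o '/apis/example.com/v[0-9]*/' | sort | uniq -c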
• [SLOW TEST:17.111 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":277,"completed":47,"skipped":653,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:50:31.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Feb 12 20:50:32.149: INFO: Waiting up to 5m0s for pod "pod-935f0308-59e2-439b-b4ae-b40b5a0e2fae" in namespace "emptydir-1857" to be "Succeeded or Failed" Feb 12 20:50:32.177: INFO: Pod "pod-935f0308-59e2-439b-b4ae-b40b5a0e2fae": Phase="Pending", Reason="", readiness=false. Elapsed: 27.371508ms Feb 12 20:50:34.181: INFO: Pod "pod-935f0308-59e2-439b-b4ae-b40b5a0e2fae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031719449s Feb 12 20:50:36.188: INFO: Pod "pod-935f0308-59e2-439b-b4ae-b40b5a0e2fae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038562478s Feb 12 20:50:38.388: INFO: Pod "pod-935f0308-59e2-439b-b4ae-b40b5a0e2fae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.238350262s Feb 12 20:50:40.393: INFO: Pod "pod-935f0308-59e2-439b-b4ae-b40b5a0e2fae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.244096023s STEP: Saw pod success Feb 12 20:50:40.393: INFO: Pod "pod-935f0308-59e2-439b-b4ae-b40b5a0e2fae" satisfied condition "Succeeded or Failed" Feb 12 20:50:40.398: INFO: Trying to get logs from node jerma-node pod pod-935f0308-59e2-439b-b4ae-b40b5a0e2fae container test-container: STEP: delete the pod Feb 12 20:50:40.427: INFO: Waiting for pod pod-935f0308-59e2-439b-b4ae-b40b5a0e2fae to disappear Feb 12 20:50:40.456: INFO: Pod pod-935f0308-59e2-439b-b4ae-b40b5a0e2fae no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:50:40.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1857" for this suite. 
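The emptyDir permission variants, including the (non-root,0777,default) case above, all reduce to mounting an emptyDir, writing through it as the configured user, and checking the mode bits of the mount point. The conformance test drives this with its own mounttest image; a hand-rolled approximation (hypothetical names) looks like this:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001        # non-root, as in the (non-root,...) variants
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -ld /mnt/ed && touch /mnt/ed/ok && echo writable"]
    volumeMounts:
    - name: ed
      mountPath: /mnt/ed
  volumes:
  - name: ed
    emptyDir: {}           # default medium (node-local disk)
EOF
kubectl logs emptydir-mode-demo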
• [SLOW TEST:8.479 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":48,"skipped":673,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:50:40.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 12 20:50:40.726: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4590 /api/v1/namespaces/watch-4590/configmaps/e2e-watch-test-resource-version 434b4ad2-7bdb-4041-9dfd-45845b408e95 8014799 0 2020-02-12 20:50:40 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 12 20:50:40.726: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4590 /api/v1/namespaces/watch-4590/configmaps/e2e-watch-test-resource-version 434b4ad2-7bdb-4041-9dfd-45845b408e95 8014800 0 2020-02-12 20:50:40 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:50:40.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4590" for this suite. 
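A watch started at a given resourceVersion replays every change that happened after that version, which is why the test sees exactly the MODIFIED and DELETED notifications that followed the first update. The same stream can be requested directly from the API; a sketch, with the namespace and version number purely illustrative:

# take the resourceVersion from an earlier write, then watch from there:
kubectl get --raw "/api/v1/namespaces/watch-4590/configmaps?watch=1&resourceVersion=8014798"
# prints one JSON watch event per line (here MODIFIED, then DELETED) until the connection times out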
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":277,"completed":49,"skipped":675,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:50:40.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Feb 12 20:50:40.936: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:50:54.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3274" for this suite. • [SLOW TEST:14.213 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":277,"completed":50,"skipped":710,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:50:54.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:229 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:281 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the initial replication controller Feb 12 20:50:55.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7662' Feb 12 20:50:55.459: INFO: stderr: "" Feb 12 20:50:55.459: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo 
pods to come up. Feb 12 20:50:55.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7662' Feb 12 20:50:55.645: INFO: stderr: "" Feb 12 20:50:55.645: INFO: stdout: "update-demo-nautilus-8fvx5 update-demo-nautilus-fvz8l " Feb 12 20:50:55.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8fvx5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7662' Feb 12 20:50:55.731: INFO: stderr: "" Feb 12 20:50:55.731: INFO: stdout: "" Feb 12 20:50:55.731: INFO: update-demo-nautilus-8fvx5 is created but not running Feb 12 20:51:00.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7662' Feb 12 20:51:00.895: INFO: stderr: "" Feb 12 20:51:00.895: INFO: stdout: "update-demo-nautilus-8fvx5 update-demo-nautilus-fvz8l " Feb 12 20:51:00.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8fvx5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7662' Feb 12 20:51:02.346: INFO: stderr: "" Feb 12 20:51:02.346: INFO: stdout: "" Feb 12 20:51:02.346: INFO: update-demo-nautilus-8fvx5 is created but not running Feb 12 20:51:07.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7662' Feb 12 20:51:07.601: INFO: stderr: "" Feb 12 20:51:07.601: INFO: stdout: "update-demo-nautilus-8fvx5 update-demo-nautilus-fvz8l " Feb 12 20:51:07.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8fvx5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7662' Feb 12 20:51:07.728: INFO: stderr: "" Feb 12 20:51:07.728: INFO: stdout: "true" Feb 12 20:51:07.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8fvx5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7662' Feb 12 20:51:07.873: INFO: stderr: "" Feb 12 20:51:07.874: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 20:51:07.874: INFO: validating pod update-demo-nautilus-8fvx5 Feb 12 20:51:07.883: INFO: got data: { "image": "nautilus.jpg" } Feb 12 20:51:07.883: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 12 20:51:07.883: INFO: update-demo-nautilus-8fvx5 is verified up and running Feb 12 20:51:07.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvz8l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7662' Feb 12 20:51:07.995: INFO: stderr: "" Feb 12 20:51:07.995: INFO: stdout: "true" Feb 12 20:51:07.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fvz8l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7662' Feb 12 20:51:08.265: INFO: stderr: "" Feb 12 20:51:08.265: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 20:51:08.265: INFO: validating pod update-demo-nautilus-fvz8l Feb 12 20:51:08.276: INFO: got data: { "image": "nautilus.jpg" } Feb 12 20:51:08.276: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 12 20:51:08.276: INFO: update-demo-nautilus-fvz8l is verified up and running STEP: rolling-update to new replication controller Feb 12 20:51:08.279: INFO: scanned /root for discovery docs: Feb 12 20:51:08.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-7662' Feb 12 20:51:40.631: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 12 20:51:40.631: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 12 20:51:40.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7662' Feb 12 20:51:40.767: INFO: stderr: "" Feb 12 20:51:40.767: INFO: stdout: "update-demo-kitten-hfmjq update-demo-kitten-tvs6v " Feb 12 20:51:40.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hfmjq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7662' Feb 12 20:51:40.871: INFO: stderr: "" Feb 12 20:51:40.871: INFO: stdout: "true" Feb 12 20:51:40.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hfmjq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7662' Feb 12 20:51:40.998: INFO: stderr: "" Feb 12 20:51:40.998: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 12 20:51:40.998: INFO: validating pod update-demo-kitten-hfmjq Feb 12 20:51:41.007: INFO: got data: { "image": "kitten.jpg" } Feb 12 20:51:41.007: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 12 20:51:41.007: INFO: update-demo-kitten-hfmjq is verified up and running Feb 12 20:51:41.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tvs6v -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7662' Feb 12 20:51:41.123: INFO: stderr: "" Feb 12 20:51:41.123: INFO: stdout: "true" Feb 12 20:51:41.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tvs6v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7662' Feb 12 20:51:41.192: INFO: stderr: "" Feb 12 20:51:41.192: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Feb 12 20:51:41.192: INFO: validating pod update-demo-kitten-tvs6v Feb 12 20:51:41.208: INFO: got data: { "image": "kitten.jpg" } Feb 12 20:51:41.208: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Feb 12 20:51:41.208: INFO: update-demo-kitten-tvs6v is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:51:41.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7662" for this suite. • [SLOW TEST:46.248 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":277,"completed":51,"skipped":783,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:51:41.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Feb 12 20:51:41.462: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. 
Feb 12 20:51:42.121: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Feb 12 20:51:44.246: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:51:46.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:51:48.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:51:50.252: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:51:52.253: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137502, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:51:55.272: INFO: Waited 994.453968ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:51:55.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5232" for this suite. 
• [SLOW TEST:14.635 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":277,"completed":52,"skipped":791,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:51:55.853: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-3bff9e02-8539-4fa7-9817-675500776ef1 STEP: Creating a pod to test consume configMaps Feb 12 20:51:56.125: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-29e76b08-7466-45ea-abdb-4a84bac884e2" in namespace "projected-7421" to be "Succeeded or Failed" Feb 12 20:51:56.150: INFO: Pod "pod-projected-configmaps-29e76b08-7466-45ea-abdb-4a84bac884e2": Phase="Pending", Reason="", readiness=false. Elapsed: 24.796744ms Feb 12 20:51:58.156: INFO: Pod "pod-projected-configmaps-29e76b08-7466-45ea-abdb-4a84bac884e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031024013s Feb 12 20:52:00.162: INFO: Pod "pod-projected-configmaps-29e76b08-7466-45ea-abdb-4a84bac884e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036942149s Feb 12 20:52:02.167: INFO: Pod "pod-projected-configmaps-29e76b08-7466-45ea-abdb-4a84bac884e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041765652s Feb 12 20:52:04.191: INFO: Pod "pod-projected-configmaps-29e76b08-7466-45ea-abdb-4a84bac884e2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065379471s Feb 12 20:52:06.196: INFO: Pod "pod-projected-configmaps-29e76b08-7466-45ea-abdb-4a84bac884e2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.070243469s STEP: Saw pod success Feb 12 20:52:06.196: INFO: Pod "pod-projected-configmaps-29e76b08-7466-45ea-abdb-4a84bac884e2" satisfied condition "Succeeded or Failed" Feb 12 20:52:06.198: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-29e76b08-7466-45ea-abdb-4a84bac884e2 container projected-configmap-volume-test: STEP: delete the pod Feb 12 20:52:06.281: INFO: Waiting for pod pod-projected-configmaps-29e76b08-7466-45ea-abdb-4a84bac884e2 to disappear Feb 12 20:52:06.287: INFO: Pod pod-projected-configmaps-29e76b08-7466-45ea-abdb-4a84bac884e2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:52:06.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7421" for this suite. • [SLOW TEST:10.444 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":277,"completed":53,"skipped":822,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:52:06.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command Feb 12 20:52:06.369: INFO: Waiting up to 5m0s for pod "client-containers-0ebb2af3-bbf8-483d-8e2d-78a930cf73ce" in namespace "containers-3224" to be "Succeeded or Failed" Feb 12 20:52:06.373: INFO: Pod "client-containers-0ebb2af3-bbf8-483d-8e2d-78a930cf73ce": Phase="Pending", Reason="", readiness=false. Elapsed: 3.868606ms Feb 12 20:52:08.379: INFO: Pod "client-containers-0ebb2af3-bbf8-483d-8e2d-78a930cf73ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010018636s Feb 12 20:52:10.388: INFO: Pod "client-containers-0ebb2af3-bbf8-483d-8e2d-78a930cf73ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019222399s Feb 12 20:52:12.403: INFO: Pod "client-containers-0ebb2af3-bbf8-483d-8e2d-78a930cf73ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033634599s Feb 12 20:52:14.407: INFO: Pod "client-containers-0ebb2af3-bbf8-483d-8e2d-78a930cf73ce": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.038018428s STEP: Saw pod success Feb 12 20:52:14.407: INFO: Pod "client-containers-0ebb2af3-bbf8-483d-8e2d-78a930cf73ce" satisfied condition "Succeeded or Failed" Feb 12 20:52:14.409: INFO: Trying to get logs from node jerma-node pod client-containers-0ebb2af3-bbf8-483d-8e2d-78a930cf73ce container test-container: STEP: delete the pod Feb 12 20:52:14.463: INFO: Waiting for pod client-containers-0ebb2af3-bbf8-483d-8e2d-78a930cf73ce to disappear Feb 12 20:52:14.473: INFO: Pod client-containers-0ebb2af3-bbf8-483d-8e2d-78a930cf73ce no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:52:14.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3224" for this suite. • [SLOW TEST:8.182 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":277,"completed":54,"skipped":866,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:52:14.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Feb 12 20:52:14.601: INFO: Waiting up to 5m0s for pod "downward-api-b53e22af-173b-4005-bd1c-cb18611af6be" in namespace "downward-api-6645" to be "Succeeded or Failed" Feb 12 20:52:14.606: INFO: Pod "downward-api-b53e22af-173b-4005-bd1c-cb18611af6be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.703916ms Feb 12 20:52:16.613: INFO: Pod "downward-api-b53e22af-173b-4005-bd1c-cb18611af6be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01225588s Feb 12 20:52:18.621: INFO: Pod "downward-api-b53e22af-173b-4005-bd1c-cb18611af6be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020099656s Feb 12 20:52:20.629: INFO: Pod "downward-api-b53e22af-173b-4005-bd1c-cb18611af6be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027852235s Feb 12 20:52:22.652: INFO: Pod "downward-api-b53e22af-173b-4005-bd1c-cb18611af6be": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.05087978s STEP: Saw pod success Feb 12 20:52:22.652: INFO: Pod "downward-api-b53e22af-173b-4005-bd1c-cb18611af6be" satisfied condition "Succeeded or Failed" Feb 12 20:52:22.656: INFO: Trying to get logs from node jerma-node pod downward-api-b53e22af-173b-4005-bd1c-cb18611af6be container dapi-container: STEP: delete the pod Feb 12 20:52:22.738: INFO: Waiting for pod downward-api-b53e22af-173b-4005-bd1c-cb18611af6be to disappear Feb 12 20:52:22.749: INFO: Pod downward-api-b53e22af-173b-4005-bd1c-cb18611af6be no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:52:22.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6645" for this suite. • [SLOW TEST:8.339 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":277,"completed":55,"skipped":869,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:52:22.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-q78p STEP: Creating a pod to test atomic-volume-subpath Feb 12 20:52:23.034: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-q78p" in namespace "subpath-4372" to be "Succeeded or Failed" Feb 12 20:52:23.049: INFO: Pod "pod-subpath-test-configmap-q78p": Phase="Pending", Reason="", readiness=false. Elapsed: 14.985519ms Feb 12 20:52:25.054: INFO: Pod "pod-subpath-test-configmap-q78p": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019935627s Feb 12 20:52:27.591: INFO: Pod "pod-subpath-test-configmap-q78p": Phase="Pending", Reason="", readiness=false. Elapsed: 4.556442848s Feb 12 20:52:29.597: INFO: Pod "pod-subpath-test-configmap-q78p": Phase="Pending", Reason="", readiness=false. Elapsed: 6.562525701s Feb 12 20:52:31.604: INFO: Pod "pod-subpath-test-configmap-q78p": Phase="Running", Reason="", readiness=true. Elapsed: 8.569814576s Feb 12 20:52:33.612: INFO: Pod "pod-subpath-test-configmap-q78p": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.577382708s Feb 12 20:52:35.619: INFO: Pod "pod-subpath-test-configmap-q78p": Phase="Running", Reason="", readiness=true. Elapsed: 12.584341272s Feb 12 20:52:37.629: INFO: Pod "pod-subpath-test-configmap-q78p": Phase="Running", Reason="", readiness=true. Elapsed: 14.594855477s Feb 12 20:52:39.637: INFO: Pod "pod-subpath-test-configmap-q78p": Phase="Running", Reason="", readiness=true. Elapsed: 16.602874172s Feb 12 20:52:41.644: INFO: Pod "pod-subpath-test-configmap-q78p": Phase="Running", Reason="", readiness=true. Elapsed: 18.609461173s Feb 12 20:52:43.651: INFO: Pod "pod-subpath-test-configmap-q78p": Phase="Running", Reason="", readiness=true. Elapsed: 20.616355352s Feb 12 20:52:45.657: INFO: Pod "pod-subpath-test-configmap-q78p": Phase="Running", Reason="", readiness=true. Elapsed: 22.622243406s Feb 12 20:52:47.664: INFO: Pod "pod-subpath-test-configmap-q78p": Phase="Running", Reason="", readiness=true. Elapsed: 24.629832843s Feb 12 20:52:49.670: INFO: Pod "pod-subpath-test-configmap-q78p": Phase="Running", Reason="", readiness=true. Elapsed: 26.635432259s Feb 12 20:52:51.948: INFO: Pod "pod-subpath-test-configmap-q78p": Phase="Running", Reason="", readiness=true. Elapsed: 28.913755516s Feb 12 20:52:53.956: INFO: Pod "pod-subpath-test-configmap-q78p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.921948037s STEP: Saw pod success Feb 12 20:52:53.957: INFO: Pod "pod-subpath-test-configmap-q78p" satisfied condition "Succeeded or Failed" Feb 12 20:52:53.961: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-q78p container test-container-subpath-configmap-q78p: STEP: delete the pod Feb 12 20:52:54.500: INFO: Waiting for pod pod-subpath-test-configmap-q78p to disappear Feb 12 20:52:54.524: INFO: Pod pod-subpath-test-configmap-q78p no longer exists STEP: Deleting pod pod-subpath-test-configmap-q78p Feb 12 20:52:54.524: INFO: Deleting pod "pod-subpath-test-configmap-q78p" in namespace "subpath-4372" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:52:54.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4372" for this suite. 
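The subPath case above mounts a single ConfigMap key directly over a file that already exists in the container image, which is what lets the atomic-writer behavior be observed at one fixed path. A minimal approximation, using /etc/passwd from the busybox image as the pre-existing file (all names and contents hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-cm
data:
  passwd: "demo-content"
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["cat", "/etc/passwd"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/passwd   # an existing file in the image
      subPath: passwd          # only this key is mounted, over that file
  volumes:
  - name: cfg
    configMap:
      name: subpath-demo-cm
EOF
kubectl logs subpath-demo   # prints "demo-content"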
• [SLOW TEST:31.719 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":277,"completed":56,"skipped":914,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:52:54.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Feb 12 20:52:54.665: INFO: Waiting up to 5m0s for pod "pod-4110fe85-87d8-4175-9043-8aa7556b5764" in namespace "emptydir-7870" to be "Succeeded or Failed" Feb 12 20:52:54.680: INFO: Pod "pod-4110fe85-87d8-4175-9043-8aa7556b5764": Phase="Pending", Reason="", readiness=false. Elapsed: 14.694231ms Feb 12 20:52:56.688: INFO: Pod "pod-4110fe85-87d8-4175-9043-8aa7556b5764": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02328094s Feb 12 20:52:59.819: INFO: Pod "pod-4110fe85-87d8-4175-9043-8aa7556b5764": Phase="Pending", Reason="", readiness=false. Elapsed: 5.153850937s Feb 12 20:53:01.829: INFO: Pod "pod-4110fe85-87d8-4175-9043-8aa7556b5764": Phase="Pending", Reason="", readiness=false. Elapsed: 7.164169292s Feb 12 20:53:03.835: INFO: Pod "pod-4110fe85-87d8-4175-9043-8aa7556b5764": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.170235127s STEP: Saw pod success Feb 12 20:53:03.835: INFO: Pod "pod-4110fe85-87d8-4175-9043-8aa7556b5764" satisfied condition "Succeeded or Failed" Feb 12 20:53:03.839: INFO: Trying to get logs from node jerma-node pod pod-4110fe85-87d8-4175-9043-8aa7556b5764 container test-container: STEP: delete the pod Feb 12 20:53:03.974: INFO: Waiting for pod pod-4110fe85-87d8-4175-9043-8aa7556b5764 to disappear Feb 12 20:53:03.984: INFO: Pod pod-4110fe85-87d8-4175-9043-8aa7556b5764 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:53:03.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7870" for this suite. 
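The tmpfs variant differs from the earlier emptyDir tests only in the volume's medium: medium: Memory backs the volume with tmpfs rather than node-local disk, and the mode check is otherwise the same. Sketch (hypothetical names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "mount | grep /mnt/ed && ls -ld /mnt/ed"]
    volumeMounts:
    - name: ed
      mountPath: /mnt/ed
  volumes:
  - name: ed
    emptyDir:
      medium: Memory          # tmpfs instead of node-local disk
EOF
kubectl logs emptydir-tmpfs-demo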
• [SLOW TEST:9.453 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":57,"skipped":918,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:53:03.993: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Feb 12 20:53:04.081: INFO: Waiting up to 5m0s for pod "downwardapi-volume-890a72fc-6b3e-4625-a507-c6f5f7958cef" in namespace "downward-api-2447" to be "Succeeded or Failed" Feb 12 20:53:04.131: INFO: Pod "downwardapi-volume-890a72fc-6b3e-4625-a507-c6f5f7958cef": Phase="Pending", Reason="", readiness=false. Elapsed: 49.549325ms Feb 12 20:53:06.160: INFO: Pod "downwardapi-volume-890a72fc-6b3e-4625-a507-c6f5f7958cef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078888644s Feb 12 20:53:08.166: INFO: Pod "downwardapi-volume-890a72fc-6b3e-4625-a507-c6f5f7958cef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084769985s Feb 12 20:53:11.012: INFO: Pod "downwardapi-volume-890a72fc-6b3e-4625-a507-c6f5f7958cef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.930166589s Feb 12 20:53:13.022: INFO: Pod "downwardapi-volume-890a72fc-6b3e-4625-a507-c6f5f7958cef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.940531027s STEP: Saw pod success Feb 12 20:53:13.022: INFO: Pod "downwardapi-volume-890a72fc-6b3e-4625-a507-c6f5f7958cef" satisfied condition "Succeeded or Failed" Feb 12 20:53:13.026: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-890a72fc-6b3e-4625-a507-c6f5f7958cef container client-container: STEP: delete the pod Feb 12 20:53:13.070: INFO: Waiting for pod downwardapi-volume-890a72fc-6b3e-4625-a507-c6f5f7958cef to disappear Feb 12 20:53:13.089: INFO: Pod downwardapi-volume-890a72fc-6b3e-4625-a507-c6f5f7958cef no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:53:13.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2447" for this suite. 
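"podname only" means the downwardAPI volume projects exactly one item, the pod's own name, into a file that the test container reads back. An equivalent manifest (hypothetical names):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # only the pod name, as in "podname only"
EOF
kubectl logs downward-demo   # prints "downward-demo"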
• [SLOW TEST:9.113 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":277,"completed":58,"skipped":919,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:53:13.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Feb 12 20:53:13.230: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Feb 12 20:53:16.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2326 create -f -' Feb 12 20:53:20.208: INFO: stderr: "" Feb 12 20:53:20.208: INFO: stdout: "e2e-test-crd-publish-openapi-7172-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Feb 12 20:53:20.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2326 delete e2e-test-crd-publish-openapi-7172-crds test-foo' Feb 12 20:53:20.371: INFO: stderr: "" Feb 12 20:53:20.371: INFO: stdout: "e2e-test-crd-publish-openapi-7172-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Feb 12 20:53:20.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2326 apply -f -' Feb 12 20:53:20.837: INFO: stderr: "" Feb 12 20:53:20.837: INFO: stdout: "e2e-test-crd-publish-openapi-7172-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Feb 12 20:53:20.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2326 delete e2e-test-crd-publish-openapi-7172-crds test-foo' Feb 12 20:53:20.967: INFO: stderr: "" Feb 12 20:53:20.967: INFO: stdout: "e2e-test-crd-publish-openapi-7172-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Feb 12 20:53:20.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2326 create -f -' Feb 12 20:53:21.378: INFO: rc: 1 Feb 12 20:53:21.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2326 apply -f -' Feb 12 20:53:21.775: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Feb 12 20:53:21.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-2326 create -f -' Feb 12 20:53:22.080: INFO: rc: 1 Feb 12 20:53:22.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2326 apply -f -' Feb 12 20:53:22.633: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Feb 12 20:53:22.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7172-crds' Feb 12 20:53:22.915: INFO: stderr: "" Feb 12 20:53:22.915: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7172-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Feb 12 20:53:22.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7172-crds.metadata' Feb 12 20:53:23.178: INFO: stderr: "" Feb 12 20:53:23.178: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7172-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. 
Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. 
Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system.
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Feb 12 20:53:23.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7172-crds.spec' Feb 12 20:53:23.462: INFO: stderr: "" Feb 12 20:53:23.462: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7172-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Feb 12 20:53:23.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7172-crds.spec.bars' Feb 12 20:53:23.831: INFO: stderr: "" Feb 12 20:53:23.831: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7172-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Feb 12 20:53:23.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7172-crds.spec.bars2' Feb 12 20:53:24.214: INFO: rc: 1
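
For reference, the schema exploration above works against any CRD whose structural schema is published to OpenAPI. A minimal sketch, with "widgets" as a stand-in resource name that is not part of this run:

    kubectl explain widgets                 # kind, version, and top-level fields
    kubectl explain widgets.spec            # drill into a single field
    kubectl explain widgets.spec.bars       # nested fields of a list of objects
    kubectl explain widgets --recursive     # the entire field tree in one call
    kubectl explain widgets.spec.nosuch     # unknown property: kubectl exits non-zero, logged above as "rc: 1"

The last command mirrors the negative check above: explaining a property missing from the published schema is an error, not an empty result.
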
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:53:41.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9277" for this suite. • [SLOW TEST:14.903 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":277,"completed":60,"skipped":945,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:53:41.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:229 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1416 STEP: creating a pod Feb 12 20:53:41.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-4266 -- logs-generator --log-lines-total 100 --run-duration 20s' Feb 12 20:53:41.379: INFO: stderr: "" Feb 12 20:53:41.380: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Feb 12 20:53:41.380: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Feb 12 20:53:41.380: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4266" to be "running and ready, or succeeded" Feb 12 20:53:41.397: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 16.82602ms Feb 12 20:53:43.404: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024352963s Feb 12 20:53:45.413: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03264645s Feb 12 20:53:47.419: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03946906s Feb 12 20:53:49.428: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.047674551s Feb 12 20:53:49.428: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Feb 12 20:53:49.428: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for matching strings Feb 12 20:53:49.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4266' Feb 12 20:53:49.635: INFO: stderr: "" Feb 12 20:53:49.635: INFO: stdout: "I0212 20:53:47.088836 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/qhx 342\nI0212 20:53:47.289201 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/dqgs 531\nI0212 20:53:47.489134 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/p7zf 421\nI0212 20:53:47.689193 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/dhh 515\nI0212 20:53:47.889574 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/f9r 503\nI0212 20:53:48.089560 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/vj7 398\nI0212 20:53:48.289317 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/vsz8 293\nI0212 20:53:48.489731 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/sqv2 205\nI0212 20:53:48.689292 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/rds 233\nI0212 20:53:48.889224 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/fn6 377\nI0212 20:53:49.089138 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/pj5 314\nI0212 20:53:49.289227 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/gm9f 393\nI0212 20:53:49.489231 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/gmm 377\n" STEP: limiting log lines Feb 12 20:53:49.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4266 --tail=1' Feb 12 20:53:49.742: INFO: stderr: "" Feb 12 20:53:49.742: INFO: stdout: "I0212 20:53:49.689097 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/zm2 424\n" Feb 12 20:53:49.742: INFO: got output "I0212 20:53:49.689097 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/zm2 424\n" STEP: limiting log bytes Feb 12 20:53:49.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4266 --limit-bytes=1' Feb 12 20:53:49.868: INFO: stderr: "" Feb 12 20:53:49.868: INFO: stdout: "I" Feb 12 20:53:49.868: INFO: got output "I" STEP: exposing timestamps Feb 12 20:53:49.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4266 --tail=1 --timestamps' Feb 12 20:53:50.018: INFO: stderr: "" Feb 12 20:53:50.018: INFO: stdout: "2020-02-12T20:53:49.891643044Z I0212 20:53:49.889173 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/4gsd 503\n" Feb 12 20:53:50.018: INFO: got output "2020-02-12T20:53:49.891643044Z I0212 20:53:49.889173 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/4gsd 503\n" STEP: restricting to a time range Feb 12 20:53:52.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4266 --since=1s' Feb 12 20:53:52.680: INFO: stderr: "" Feb 12 20:53:52.680: INFO: stdout: "I0212 20:53:51.689152 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/qvw 404\nI0212 20:53:51.889795 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/k44b 561\nI0212 20:53:52.089198 1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/kwhp 432\nI0212 20:53:52.289106 1 logs_generator.go:76] 26 POST /api/v1/namespaces/kube-system/pods/6mj4 389\nI0212
20:53:52.489113 1 logs_generator.go:76] 27 GET /api/v1/namespaces/ns/pods/4cl 406\n" Feb 12 20:53:52.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4266 --since=24h' Feb 12 20:53:52.796: INFO: stderr: "" Feb 12 20:53:52.796: INFO: stdout: "I0212 20:53:47.088836 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/qhx 342\nI0212 20:53:47.289201 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/dqgs 531\nI0212 20:53:47.489134 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/p7zf 421\nI0212 20:53:47.689193 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/dhh 515\nI0212 20:53:47.889574 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/f9r 503\nI0212 20:53:48.089560 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/vj7 398\nI0212 20:53:48.289317 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/vsz8 293\nI0212 20:53:48.489731 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/sqv2 205\nI0212 20:53:48.689292 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/rds 233\nI0212 20:53:48.889224 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/fn6 377\nI0212 20:53:49.089138 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/pj5 314\nI0212 20:53:49.289227 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/gm9f 393\nI0212 20:53:49.489231 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/gmm 377\nI0212 20:53:49.689097 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/zm2 424\nI0212 20:53:49.889173 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/4gsd 503\nI0212 20:53:50.089205 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/ftxq 436\nI0212 20:53:50.289039 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/qhv 464\nI0212 20:53:50.489236 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/rt69 303\nI0212 20:53:50.689105 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/56s8 232\nI0212 20:53:50.889275 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/wwf 402\nI0212 20:53:51.089165 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/pk26 251\nI0212 20:53:51.289207 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/ns/pods/lhg 374\nI0212 20:53:51.489107 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/wb7w 444\nI0212 20:53:51.689152 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/qvw 404\nI0212 20:53:51.889795 1 logs_generator.go:76] 24 POST /api/v1/namespaces/ns/pods/k44b 561\nI0212 20:53:52.089198 1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/kwhp 432\nI0212 20:53:52.289106 1 logs_generator.go:76] 26 POST /api/v1/namespaces/kube-system/pods/6mj4 389\nI0212 20:53:52.489113 1 logs_generator.go:76] 27 GET /api/v1/namespaces/ns/pods/4cl 406\nI0212 20:53:52.689254 1 logs_generator.go:76] 28 POST /api/v1/namespaces/default/pods/qfpq 259\n"
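
For reference, the filtering flags exercised above are ordinary kubectl logs options and can be combined freely; a minimal sketch against the same pod (the namespace placeholder is illustrative):

    kubectl logs logs-generator --namespace=<ns>                         # full log so far
    kubectl logs logs-generator --namespace=<ns> --tail=1                # last line only
    kubectl logs logs-generator --namespace=<ns> --limit-bytes=1         # cap the output in bytes
    kubectl logs logs-generator --namespace=<ns> --tail=1 --timestamps   # prefix each line with an RFC3339 timestamp
    kubectl logs logs-generator --namespace=<ns> --since=1s              # only entries from the last second
    kubectl logs logs-generator --namespace=<ns> --since=24h             # everything from the last day
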
[AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1422 Feb 12 20:53:52.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4266' Feb 12 20:54:02.338: INFO: stderr: "" Feb 12 20:54:02.338: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:54:02.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4266" for this suite. • [SLOW TEST:21.289 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1412 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":277,"completed":61,"skipped":956,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:54:02.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-t62d STEP: Creating a pod to test atomic-volume-subpath Feb 12 20:54:02.454: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-t62d" in namespace "subpath-384" to be "Succeeded or Failed" Feb 12 20:54:02.469: INFO: Pod "pod-subpath-test-projected-t62d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.209193ms Feb 12 20:54:04.476: INFO: Pod "pod-subpath-test-projected-t62d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022031194s Feb 12 20:54:06.483: INFO: Pod "pod-subpath-test-projected-t62d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028605442s Feb 12 20:54:08.490: INFO: Pod "pod-subpath-test-projected-t62d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035757211s Feb 12 20:54:10.508: INFO: Pod "pod-subpath-test-projected-t62d": Phase="Running", Reason="", readiness=true. Elapsed: 8.053106095s Feb 12 20:54:12.516: INFO: Pod "pod-subpath-test-projected-t62d": Phase="Running", Reason="", readiness=true. Elapsed: 10.061744193s Feb 12 20:54:14.527: INFO: Pod "pod-subpath-test-projected-t62d": Phase="Running", Reason="", readiness=true. Elapsed: 12.072824864s Feb 12 20:54:16.534: INFO: Pod "pod-subpath-test-projected-t62d": Phase="Running", Reason="", readiness=true. Elapsed: 14.079953137s Feb 12 20:54:18.551: INFO: Pod "pod-subpath-test-projected-t62d": Phase="Running", Reason="", readiness=true. Elapsed: 16.096502203s Feb 12 20:54:20.558: INFO: Pod "pod-subpath-test-projected-t62d": Phase="Running", Reason="", readiness=true. Elapsed: 18.103996833s Feb 12 20:54:22.568: INFO: Pod "pod-subpath-test-projected-t62d": Phase="Running", Reason="", readiness=true. Elapsed: 20.11356337s Feb 12 20:54:24.574: INFO: Pod "pod-subpath-test-projected-t62d": Phase="Running", Reason="", readiness=true.
Elapsed: 22.119695771s Feb 12 20:54:26.581: INFO: Pod "pod-subpath-test-projected-t62d": Phase="Running", Reason="", readiness=true. Elapsed: 24.126928163s Feb 12 20:54:28.594: INFO: Pod "pod-subpath-test-projected-t62d": Phase="Running", Reason="", readiness=true. Elapsed: 26.139126148s Feb 12 20:54:30.603: INFO: Pod "pod-subpath-test-projected-t62d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.148694194s STEP: Saw pod success Feb 12 20:54:30.603: INFO: Pod "pod-subpath-test-projected-t62d" satisfied condition "Succeeded or Failed" Feb 12 20:54:30.608: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-t62d container test-container-subpath-projected-t62d: STEP: delete the pod Feb 12 20:54:30.647: INFO: Waiting for pod pod-subpath-test-projected-t62d to disappear Feb 12 20:54:30.657: INFO: Pod pod-subpath-test-projected-t62d no longer exists STEP: Deleting pod pod-subpath-test-projected-t62d Feb 12 20:54:30.657: INFO: Deleting pod "pod-subpath-test-projected-t62d" in namespace "subpath-384" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:54:30.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-384" for this suite. • [SLOW TEST:28.314 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":277,"completed":62,"skipped":979,"failed":0} SSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:54:30.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-e06b38e1-6da1-4f91-b9d9-7065d618b5eb STEP: Creating secret with name secret-projected-all-test-volume-08c4bf99-eacc-447d-b790-7aebd55cb66d STEP: Creating a pod to test Check all projections for projected volume plugin Feb 12 20:54:30.823: INFO: Waiting up to 5m0s for pod "projected-volume-4e9b6b84-9190-460f-a90f-e2fb2650cf51" in namespace "projected-7848" to be "Succeeded or Failed" Feb 12 20:54:30.917: INFO: Pod "projected-volume-4e9b6b84-9190-460f-a90f-e2fb2650cf51": Phase="Pending", Reason="", readiness=false. 
Elapsed: 93.251107ms Feb 12 20:54:32.923: INFO: Pod "projected-volume-4e9b6b84-9190-460f-a90f-e2fb2650cf51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100077068s Feb 12 20:54:34.947: INFO: Pod "projected-volume-4e9b6b84-9190-460f-a90f-e2fb2650cf51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123548882s Feb 12 20:54:36.953: INFO: Pod "projected-volume-4e9b6b84-9190-460f-a90f-e2fb2650cf51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130058326s Feb 12 20:54:38.957: INFO: Pod "projected-volume-4e9b6b84-9190-460f-a90f-e2fb2650cf51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.133884105s STEP: Saw pod success Feb 12 20:54:38.957: INFO: Pod "projected-volume-4e9b6b84-9190-460f-a90f-e2fb2650cf51" satisfied condition "Succeeded or Failed" Feb 12 20:54:38.960: INFO: Trying to get logs from node jerma-node pod projected-volume-4e9b6b84-9190-460f-a90f-e2fb2650cf51 container projected-all-volume-test: STEP: delete the pod Feb 12 20:54:38.990: INFO: Waiting for pod projected-volume-4e9b6b84-9190-460f-a90f-e2fb2650cf51 to disappear Feb 12 20:54:39.034: INFO: Pod projected-volume-4e9b6b84-9190-460f-a90f-e2fb2650cf51 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:54:39.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7848" for this suite. • [SLOW TEST:8.376 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":277,"completed":63,"skipped":983,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:54:39.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Feb 12 20:54:39.351: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e7531336-1b0d-4615-ac91-c87ea1a7cf21" in namespace "downward-api-8471" to be "Succeeded or Failed" Feb 12 20:54:39.380: INFO: Pod "downwardapi-volume-e7531336-1b0d-4615-ac91-c87ea1a7cf21": Phase="Pending", Reason="", readiness=false. Elapsed: 29.490058ms Feb 12 20:54:41.389: INFO: Pod "downwardapi-volume-e7531336-1b0d-4615-ac91-c87ea1a7cf21": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.038360144s Feb 12 20:54:43.395: INFO: Pod "downwardapi-volume-e7531336-1b0d-4615-ac91-c87ea1a7cf21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044791449s Feb 12 20:54:45.401: INFO: Pod "downwardapi-volume-e7531336-1b0d-4615-ac91-c87ea1a7cf21": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050814997s Feb 12 20:54:47.411: INFO: Pod "downwardapi-volume-e7531336-1b0d-4615-ac91-c87ea1a7cf21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059962435s STEP: Saw pod success Feb 12 20:54:47.411: INFO: Pod "downwardapi-volume-e7531336-1b0d-4615-ac91-c87ea1a7cf21" satisfied condition "Succeeded or Failed" Feb 12 20:54:47.415: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e7531336-1b0d-4615-ac91-c87ea1a7cf21 container client-container: STEP: delete the pod Feb 12 20:54:47.462: INFO: Waiting for pod downwardapi-volume-e7531336-1b0d-4615-ac91-c87ea1a7cf21 to disappear Feb 12 20:54:47.637: INFO: Pod downwardapi-volume-e7531336-1b0d-4615-ac91-c87ea1a7cf21 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:54:47.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8471" for this suite. • [SLOW TEST:8.616 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":277,"completed":64,"skipped":984,"failed":0} [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:54:47.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 12 20:54:47.893: INFO: Waiting up to 5m0s for pod "pod-e797f1a6-f079-462a-b1ad-0f242ae4cab2" in namespace "emptydir-827" to be "Succeeded or Failed" Feb 12 20:54:47.904: INFO: Pod "pod-e797f1a6-f079-462a-b1ad-0f242ae4cab2": Phase="Pending", Reason="", readiness=false. Elapsed: 11.1091ms Feb 12 20:54:49.914: INFO: Pod "pod-e797f1a6-f079-462a-b1ad-0f242ae4cab2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020681326s Feb 12 20:54:51.919: INFO: Pod "pod-e797f1a6-f079-462a-b1ad-0f242ae4cab2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026031572s Feb 12 20:54:53.928: INFO: Pod "pod-e797f1a6-f079-462a-b1ad-0f242ae4cab2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034907294s Feb 12 20:54:55.933: INFO: Pod "pod-e797f1a6-f079-462a-b1ad-0f242ae4cab2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.040447433s STEP: Saw pod success Feb 12 20:54:55.933: INFO: Pod "pod-e797f1a6-f079-462a-b1ad-0f242ae4cab2" satisfied condition "Succeeded or Failed" Feb 12 20:54:55.937: INFO: Trying to get logs from node jerma-node pod pod-e797f1a6-f079-462a-b1ad-0f242ae4cab2 container test-container: STEP: delete the pod Feb 12 20:54:56.140: INFO: Waiting for pod pod-e797f1a6-f079-462a-b1ad-0f242ae4cab2 to disappear Feb 12 20:54:56.163: INFO: Pod pod-e797f1a6-f079-462a-b1ad-0f242ae4cab2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:54:56.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-827" for this suite. • [SLOW TEST:8.518 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":65,"skipped":984,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:54:56.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Feb 12 20:54:56.294: INFO: Waiting up to 5m0s for pod "downward-api-5d933094-9be0-4e63-8e5e-fc996c0a6d76" in namespace "downward-api-2210" to be "Succeeded or Failed" Feb 12 20:54:56.301: INFO: Pod "downward-api-5d933094-9be0-4e63-8e5e-fc996c0a6d76": Phase="Pending", Reason="", readiness=false. Elapsed: 6.654926ms Feb 12 20:54:58.308: INFO: Pod "downward-api-5d933094-9be0-4e63-8e5e-fc996c0a6d76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013646046s Feb 12 20:55:00.316: INFO: Pod "downward-api-5d933094-9be0-4e63-8e5e-fc996c0a6d76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021478467s Feb 12 20:55:02.322: INFO: Pod "downward-api-5d933094-9be0-4e63-8e5e-fc996c0a6d76": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027686525s Feb 12 20:55:04.329: INFO: Pod "downward-api-5d933094-9be0-4e63-8e5e-fc996c0a6d76": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.034789223s STEP: Saw pod success Feb 12 20:55:04.329: INFO: Pod "downward-api-5d933094-9be0-4e63-8e5e-fc996c0a6d76" satisfied condition "Succeeded or Failed" Feb 12 20:55:04.340: INFO: Trying to get logs from node jerma-node pod downward-api-5d933094-9be0-4e63-8e5e-fc996c0a6d76 container dapi-container: STEP: delete the pod Feb 12 20:55:04.513: INFO: Waiting for pod downward-api-5d933094-9be0-4e63-8e5e-fc996c0a6d76 to disappear Feb 12 20:55:04.532: INFO: Pod downward-api-5d933094-9be0-4e63-8e5e-fc996c0a6d76 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:55:04.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2210" for this suite. • [SLOW TEST:8.368 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":277,"completed":66,"skipped":1009,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:55:04.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:55:12.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3912" for this suite. 
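
For reference, a hand-run analogue of this check (a sketch, not the test's exact pod spec; names are illustrative): run a container whose command always fails, then read the reason recorded in its terminated state.

    kubectl run always-fails --image=busybox:1.29 --restart=Never --command -- /bin/false
    # after the container exits:
    kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
    # a non-zero exit is reported as the reason Error
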
• [SLOW TEST:8.224 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":277,"completed":67,"skipped":1014,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:55:12.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Feb 12 20:55:12.903: INFO: Waiting up to 5m0s for pod "client-containers-9d0a2078-15c8-41ff-9853-f0c69d4a61dc" in namespace "containers-9055" to be "Succeeded or Failed" Feb 12 20:55:12.987: INFO: Pod "client-containers-9d0a2078-15c8-41ff-9853-f0c69d4a61dc": Phase="Pending", Reason="", readiness=false. Elapsed: 84.445225ms Feb 12 20:55:14.992: INFO: Pod "client-containers-9d0a2078-15c8-41ff-9853-f0c69d4a61dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089193189s Feb 12 20:55:16.999: INFO: Pod "client-containers-9d0a2078-15c8-41ff-9853-f0c69d4a61dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096013849s Feb 12 20:55:19.009: INFO: Pod "client-containers-9d0a2078-15c8-41ff-9853-f0c69d4a61dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106426684s Feb 12 20:55:21.015: INFO: Pod "client-containers-9d0a2078-15c8-41ff-9853-f0c69d4a61dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.112302233s STEP: Saw pod success Feb 12 20:55:21.015: INFO: Pod "client-containers-9d0a2078-15c8-41ff-9853-f0c69d4a61dc" satisfied condition "Succeeded or Failed" Feb 12 20:55:21.020: INFO: Trying to get logs from node jerma-node pod client-containers-9d0a2078-15c8-41ff-9853-f0c69d4a61dc container test-container: STEP: delete the pod Feb 12 20:55:21.072: INFO: Waiting for pod client-containers-9d0a2078-15c8-41ff-9853-f0c69d4a61dc to disappear Feb 12 20:55:21.077: INFO: Pod client-containers-9d0a2078-15c8-41ff-9853-f0c69d4a61dc no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:55:21.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9055" for this suite. 
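
For reference, a minimal sketch of the pattern this spec exercises (manifest and names illustrative, not from the run): in a pod spec, command replaces the image's ENTRYPOINT and args replaces its CMD, so both defaults can be overridden together.

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: override-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox:1.29
        command: ["/bin/echo"]       # replaces the image ENTRYPOINT
        args: ["hello", "override"]  # replaces the image CMD
    EOF
    kubectl logs override-demo       # prints: hello override
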
• [SLOW TEST:8.319 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":277,"completed":68,"skipped":1030,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:55:21.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Feb 12 20:55:21.278: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9916683-bb01-4589-9050-7b1bdd8a1bdc" in namespace "downward-api-9253" to be "Succeeded or Failed" Feb 12 20:55:21.288: INFO: Pod "downwardapi-volume-d9916683-bb01-4589-9050-7b1bdd8a1bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.007361ms Feb 12 20:55:23.313: INFO: Pod "downwardapi-volume-d9916683-bb01-4589-9050-7b1bdd8a1bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034433511s Feb 12 20:55:25.321: INFO: Pod "downwardapi-volume-d9916683-bb01-4589-9050-7b1bdd8a1bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042049664s Feb 12 20:55:27.333: INFO: Pod "downwardapi-volume-d9916683-bb01-4589-9050-7b1bdd8a1bdc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054824823s Feb 12 20:55:29.339: INFO: Pod "downwardapi-volume-d9916683-bb01-4589-9050-7b1bdd8a1bdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060977207s STEP: Saw pod success Feb 12 20:55:29.340: INFO: Pod "downwardapi-volume-d9916683-bb01-4589-9050-7b1bdd8a1bdc" satisfied condition "Succeeded or Failed" Feb 12 20:55:29.347: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d9916683-bb01-4589-9050-7b1bdd8a1bdc container client-container: STEP: delete the pod Feb 12 20:55:29.385: INFO: Waiting for pod downwardapi-volume-d9916683-bb01-4589-9050-7b1bdd8a1bdc to disappear Feb 12 20:55:29.389: INFO: Pod downwardapi-volume-d9916683-bb01-4589-9050-7b1bdd8a1bdc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:55:29.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9253" for this suite. 
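
For reference, a minimal sketch of the downward API volume used by this spec (names and sizes illustrative): a resourceFieldRef projects the container's own memory request into a file under the mount, which the container then prints.

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
        resources:
          requests:
            memory: "32Mi"
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
    EOF
    kubectl logs downward-demo       # with the default divisor of 1, prints the request in bytes: 33554432
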
• [SLOW TEST:8.311 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":277,"completed":69,"skipped":1053,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:55:29.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Feb 12 20:55:29.469: INFO: PodSpec: initContainers in spec.initContainers Feb 12 20:56:26.349: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-20962c17-df7b-46e9-b372-4d175195d481", GenerateName:"", Namespace:"init-container-401", SelfLink:"/api/v1/namespaces/init-container-401/pods/pod-init-20962c17-df7b-46e9-b372-4d175195d481", UID:"ec8f6d52-ee7b-47ad-841d-085c0bdd8b69", ResourceVersion:"8016271", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717137729, loc:(*time.Location)(0x7eef300)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"469830678"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-vt8sd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00606a2c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vt8sd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vt8sd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vt8sd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc005a05e38), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0077fa1e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005a05ec0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc005a05ee0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc005a05ee8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc005a05eec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137729, loc:(*time.Location)(0x7eef300)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137729, loc:(*time.Location)(0x7eef300)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137729, loc:(*time.Location)(0x7eef300)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137729, loc:(*time.Location)(0x7eef300)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc0045505c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001bd2850)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001bd28c0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://fb90b199d442341c7829561a43bf733a52933c0ea3c8e616c5fb672659f22f8f", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc004550600), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0045505e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc005a05f6f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:56:26.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-401" for this suite. • [SLOW TEST:56.981 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":277,"completed":70,"skipped":1069,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:56:26.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:229 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1334 STEP: creating the pod Feb 12 20:56:26.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3664' Feb 12 20:56:26.903: INFO: stderr: "" Feb 12 20:56:26.903: INFO: stdout: "pod/pause created\n" Feb 12 20:56:26.903: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 12 20:56:26.903: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3664" to be "running and ready" Feb 12 20:56:26.916: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.119081ms Feb 12 20:56:28.926: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02236984s Feb 12 20:56:30.934: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030718995s Feb 12 20:56:32.941: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.036990265s Feb 12 20:56:34.946: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.042826731s Feb 12 20:56:34.947: INFO: Pod "pause" satisfied condition "running and ready" Feb 12 20:56:34.947: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Feb 12 20:56:34.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3664' Feb 12 20:56:35.100: INFO: stderr: "" Feb 12 20:56:35.100: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 12 20:56:35.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3664' Feb 12 20:56:35.231: INFO: stderr: "" Feb 12 20:56:35.231: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 12 20:56:35.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3664' Feb 12 20:56:35.410: INFO: stderr: "" Feb 12 20:56:35.410: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 12 20:56:35.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3664' Feb 12 20:56:35.493: INFO: stderr: "" Feb 12 20:56:35.494: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1341 STEP: using delete to clean up resources Feb 12 20:56:35.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3664' Feb 12 20:56:35.622: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 12 20:56:35.622: INFO: stdout: "pod \"pause\" force deleted\n" Feb 12 20:56:35.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3664' Feb 12 20:56:35.768: INFO: stderr: "No resources found in kubectl-3664 namespace.\n" Feb 12 20:56:35.768: INFO: stdout: "" Feb 12 20:56:35.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3664 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 12 20:56:35.859: INFO: stderr: "" Feb 12 20:56:35.859: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:56:35.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3664" for this suite. 
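The label round-trip above reduces to three kubectl invocations if you want to replay it by hand (the pod and namespace names are the ones this run generated; substitute your own):

  # attach the label, then show it as an extra column via -L
  kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-3664
  kubectl get pod pause -L testing-label --namespace=kubectl-3664
  # a trailing '-' after the key removes the label again
  kubectl label pods pause testing-label- --namespace=kubectl-3664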
• [SLOW TEST:9.489 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1331 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":277,"completed":71,"skipped":1079,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:56:35.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-map-6ab7594a-eced-49b3-be7f-a5cc488052bb STEP: Creating a pod to test consume configMaps Feb 12 20:56:36.111: INFO: Waiting up to 5m0s for pod "pod-configmaps-92115e53-697d-4e3a-8611-14597d12ed22" in namespace "configmap-6153" to be "Succeeded or Failed" Feb 12 20:56:36.128: INFO: Pod "pod-configmaps-92115e53-697d-4e3a-8611-14597d12ed22": Phase="Pending", Reason="", readiness=false. Elapsed: 16.444711ms Feb 12 20:56:38.134: INFO: Pod "pod-configmaps-92115e53-697d-4e3a-8611-14597d12ed22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022484787s Feb 12 20:56:40.139: INFO: Pod "pod-configmaps-92115e53-697d-4e3a-8611-14597d12ed22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027599672s Feb 12 20:56:42.145: INFO: Pod "pod-configmaps-92115e53-697d-4e3a-8611-14597d12ed22": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033851168s Feb 12 20:56:44.152: INFO: Pod "pod-configmaps-92115e53-697d-4e3a-8611-14597d12ed22": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04020687s Feb 12 20:56:46.157: INFO: Pod "pod-configmaps-92115e53-697d-4e3a-8611-14597d12ed22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.04550914s STEP: Saw pod success Feb 12 20:56:46.157: INFO: Pod "pod-configmaps-92115e53-697d-4e3a-8611-14597d12ed22" satisfied condition "Succeeded or Failed" Feb 12 20:56:46.160: INFO: Trying to get logs from node jerma-node pod pod-configmaps-92115e53-697d-4e3a-8611-14597d12ed22 container configmap-volume-test: STEP: delete the pod Feb 12 20:56:46.211: INFO: Waiting for pod pod-configmaps-92115e53-697d-4e3a-8611-14597d12ed22 to disappear Feb 12 20:56:46.228: INFO: Pod pod-configmaps-92115e53-697d-4e3a-8611-14597d12ed22 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:56:46.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6153" for this suite. 
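The manifest this ConfigMap test submits is not echoed into the log; a minimal sketch that exercises the same behavior (a ConfigMap key remapped to a custom file path inside the mounted volume) could look like the following. The ConfigMap name, pod name, data key, and the busybox image are illustrative assumptions, not values from this run; only the container name matches the log.

  kubectl create configmap test-cm --from-literal=data-1=value-1 --namespace=configmap-6153
  kubectl create -f - --namespace=configmap-6153 <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-configmaps-demo
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test    # container name the log fetches logs from
      image: busybox:1.29            # assumed image; the real test uses its own test image
      command: ["cat", "/etc/configmap-volume/path/to/data-1"]
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume
    volumes:
    - name: configmap-volume
      configMap:
        name: test-cm
        items:                       # the "mappings" under test: key -> custom file path
        - key: data-1
          path: path/to/data-1
  EOF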
• [SLOW TEST:10.368 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":277,"completed":72,"skipped":1086,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:56:46.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:56:46.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2163" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":277,"completed":73,"skipped":1095,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:56:46.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:56:53.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-768" for this suite. STEP: Destroying namespace "nsdeletetest-8857" for this suite. Feb 12 20:56:53.052: INFO: Namespace nsdeletetest-8857 was already deleted STEP: Destroying namespace "nsdeletetest-6383" for this suite. • [SLOW TEST:6.543 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":277,"completed":74,"skipped":1124,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:56:53.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:229 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Feb 12 20:56:53.146: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:56:53.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6421" for this suite.
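With -p 0 the proxy binds an ephemeral port and reports it on startup, which is what the test then curls. A by-hand equivalent, where <port> stands for whatever your invocation prints:

  kubectl proxy -p 0 --disable-filter &
  # stdout reports the bound address, e.g.: Starting to serve on 127.0.0.1:<port>
  curl http://127.0.0.1:<port>/api/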
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":277,"completed":75,"skipped":1129,"failed":0} SS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:56:53.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6883.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6883.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 12 20:57:05.434: INFO: DNS probes using dns-6883/dns-test-418d2524-0c1c-41ed-8904-d4354f07b1fb succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:57:05.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6883" for this suite. 
• [SLOW TEST:12.344 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":277,"completed":76,"skipped":1131,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:57:05.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:692 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-480 STEP: creating replication controller nodeport-test in namespace services-480 I0212 20:57:05.721377 9 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-480, replica count: 2 I0212 20:57:08.772060 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 20:57:11.772330 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 20:57:14.772792 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 20:57:17.773067 9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 12 20:57:17.773: INFO: Creating new exec pod Feb 12 20:57:26.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-480 execpod224tz -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Feb 12 20:57:27.222: INFO: stderr: "I0212 20:57:27.054382 2074 log.go:172] (0xc0009c7970) (0xc000a44820) Create stream\nI0212 20:57:27.054663 2074 log.go:172] (0xc0009c7970) (0xc000a44820) Stream added, broadcasting: 1\nI0212 20:57:27.067009 2074 log.go:172] (0xc0009c7970) Reply frame received for 1\nI0212 20:57:27.067073 2074 log.go:172] (0xc0009c7970) (0xc0009a60a0) Create stream\nI0212 20:57:27.067088 2074 log.go:172] (0xc0009c7970) (0xc0009a60a0) Stream added, broadcasting: 3\nI0212 20:57:27.069262 2074 log.go:172] (0xc0009c7970) Reply frame received for 3\nI0212 20:57:27.069301 2074 log.go:172] (0xc0009c7970) (0xc00090e640) Create stream\nI0212 20:57:27.069317 2074 log.go:172] (0xc0009c7970) (0xc00090e640) Stream added, broadcasting: 5\nI0212 20:57:27.074711 2074 log.go:172] (0xc0009c7970) Reply frame received for 5\nI0212 20:57:27.145657 2074 log.go:172] (0xc0009c7970) Data frame received for 5\nI0212 20:57:27.145820 2074 log.go:172] 
(0xc00090e640) (5) Data frame handling\nI0212 20:57:27.145888 2074 log.go:172] (0xc00090e640) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0212 20:57:27.155957 2074 log.go:172] (0xc0009c7970) Data frame received for 5\nI0212 20:57:27.156002 2074 log.go:172] (0xc00090e640) (5) Data frame handling\nI0212 20:57:27.156016 2074 log.go:172] (0xc00090e640) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0212 20:57:27.212999 2074 log.go:172] (0xc0009c7970) (0xc0009a60a0) Stream removed, broadcasting: 3\nI0212 20:57:27.213866 2074 log.go:172] (0xc0009c7970) Data frame received for 1\nI0212 20:57:27.213983 2074 log.go:172] (0xc0009c7970) (0xc00090e640) Stream removed, broadcasting: 5\nI0212 20:57:27.214073 2074 log.go:172] (0xc000a44820) (1) Data frame handling\nI0212 20:57:27.214143 2074 log.go:172] (0xc000a44820) (1) Data frame sent\nI0212 20:57:27.214156 2074 log.go:172] (0xc0009c7970) (0xc000a44820) Stream removed, broadcasting: 1\nI0212 20:57:27.214169 2074 log.go:172] (0xc0009c7970) Go away received\nI0212 20:57:27.215081 2074 log.go:172] (0xc0009c7970) (0xc000a44820) Stream removed, broadcasting: 1\nI0212 20:57:27.215110 2074 log.go:172] (0xc0009c7970) (0xc0009a60a0) Stream removed, broadcasting: 3\nI0212 20:57:27.215126 2074 log.go:172] (0xc0009c7970) (0xc00090e640) Stream removed, broadcasting: 5\n" Feb 12 20:57:27.222: INFO: stdout: "" Feb 12 20:57:27.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-480 execpod224tz -- /bin/sh -x -c nc -zv -t -w 2 10.96.115.18 80' Feb 12 20:57:27.516: INFO: stderr: "I0212 20:57:27.342809 2093 log.go:172] (0xc000a9eb00) (0xc000ab8320) Create stream\nI0212 20:57:27.342953 2093 log.go:172] (0xc000a9eb00) (0xc000ab8320) Stream added, broadcasting: 1\nI0212 20:57:27.345387 2093 log.go:172] (0xc000a9eb00) Reply frame received for 1\nI0212 20:57:27.345414 2093 log.go:172] (0xc000a9eb00) (0xc000ab83c0) Create stream\nI0212 20:57:27.345420 2093 log.go:172] (0xc000a9eb00) (0xc000ab83c0) Stream added, broadcasting: 3\nI0212 20:57:27.346461 2093 log.go:172] (0xc000a9eb00) Reply frame received for 3\nI0212 20:57:27.346484 2093 log.go:172] (0xc000a9eb00) (0xc000a8e0a0) Create stream\nI0212 20:57:27.346496 2093 log.go:172] (0xc000a9eb00) (0xc000a8e0a0) Stream added, broadcasting: 5\nI0212 20:57:27.347513 2093 log.go:172] (0xc000a9eb00) Reply frame received for 5\nI0212 20:57:27.410898 2093 log.go:172] (0xc000a9eb00) Data frame received for 5\nI0212 20:57:27.410931 2093 log.go:172] (0xc000a8e0a0) (5) Data frame handling\nI0212 20:57:27.410949 2093 log.go:172] (0xc000a8e0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.115.18 80\nConnection to 10.96.115.18 80 port [tcp/http] succeeded!\nI0212 20:57:27.503832 2093 log.go:172] (0xc000a9eb00) (0xc000ab83c0) Stream removed, broadcasting: 3\nI0212 20:57:27.504056 2093 log.go:172] (0xc000a9eb00) Data frame received for 1\nI0212 20:57:27.504091 2093 log.go:172] (0xc000ab8320) (1) Data frame handling\nI0212 20:57:27.504130 2093 log.go:172] (0xc000ab8320) (1) Data frame sent\nI0212 20:57:27.504231 2093 log.go:172] (0xc000a9eb00) (0xc000ab8320) Stream removed, broadcasting: 1\nI0212 20:57:27.504357 2093 log.go:172] (0xc000a9eb00) (0xc000a8e0a0) Stream removed, broadcasting: 5\nI0212 20:57:27.504677 2093 log.go:172] (0xc000a9eb00) Go away received\nI0212 20:57:27.505647 2093 log.go:172] (0xc000a9eb00) (0xc000ab8320) Stream removed, broadcasting: 1\nI0212 20:57:27.505663 2093 log.go:172] (0xc000a9eb00) (0xc000ab83c0) Stream removed, 
broadcasting: 3\nI0212 20:57:27.505675 2093 log.go:172] (0xc000a9eb00) (0xc000a8e0a0) Stream removed, broadcasting: 5\n" Feb 12 20:57:27.516: INFO: stdout: "" Feb 12 20:57:27.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-480 execpod224tz -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30592' Feb 12 20:57:27.808: INFO: stderr: "I0212 20:57:27.644851 2113 log.go:172] (0xc0000f53f0) (0xc000621b80) Create stream\nI0212 20:57:27.645002 2113 log.go:172] (0xc0000f53f0) (0xc000621b80) Stream added, broadcasting: 1\nI0212 20:57:27.648177 2113 log.go:172] (0xc0000f53f0) Reply frame received for 1\nI0212 20:57:27.648197 2113 log.go:172] (0xc0000f53f0) (0xc000621d60) Create stream\nI0212 20:57:27.648203 2113 log.go:172] (0xc0000f53f0) (0xc000621d60) Stream added, broadcasting: 3\nI0212 20:57:27.651188 2113 log.go:172] (0xc0000f53f0) Reply frame received for 3\nI0212 20:57:27.651206 2113 log.go:172] (0xc0000f53f0) (0xc0008d6000) Create stream\nI0212 20:57:27.651213 2113 log.go:172] (0xc0000f53f0) (0xc0008d6000) Stream added, broadcasting: 5\nI0212 20:57:27.652529 2113 log.go:172] (0xc0000f53f0) Reply frame received for 5\nI0212 20:57:27.723879 2113 log.go:172] (0xc0000f53f0) Data frame received for 5\nI0212 20:57:27.724003 2113 log.go:172] (0xc0008d6000) (5) Data frame handling\nI0212 20:57:27.724026 2113 log.go:172] (0xc0008d6000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30592\nI0212 20:57:27.735681 2113 log.go:172] (0xc0000f53f0) Data frame received for 5\nI0212 20:57:27.735700 2113 log.go:172] (0xc0008d6000) (5) Data frame handling\nI0212 20:57:27.735710 2113 log.go:172] (0xc0008d6000) (5) Data frame sent\nConnection to 10.96.2.250 30592 port [tcp/30592] succeeded!\nI0212 20:57:27.800476 2113 log.go:172] (0xc0000f53f0) Data frame received for 1\nI0212 20:57:27.800723 2113 log.go:172] (0xc0000f53f0) (0xc0008d6000) Stream removed, broadcasting: 5\nI0212 20:57:27.800868 2113 log.go:172] (0xc0000f53f0) (0xc000621d60) Stream removed, broadcasting: 3\nI0212 20:57:27.800921 2113 log.go:172] (0xc000621b80) (1) Data frame handling\nI0212 20:57:27.800957 2113 log.go:172] (0xc000621b80) (1) Data frame sent\nI0212 20:57:27.800985 2113 log.go:172] (0xc0000f53f0) (0xc000621b80) Stream removed, broadcasting: 1\nI0212 20:57:27.801008 2113 log.go:172] (0xc0000f53f0) Go away received\nI0212 20:57:27.802356 2113 log.go:172] (0xc0000f53f0) (0xc000621b80) Stream removed, broadcasting: 1\nI0212 20:57:27.802383 2113 log.go:172] (0xc0000f53f0) (0xc000621d60) Stream removed, broadcasting: 3\nI0212 20:57:27.802388 2113 log.go:172] (0xc0000f53f0) (0xc0008d6000) Stream removed, broadcasting: 5\n" Feb 12 20:57:27.808: INFO: stdout: "" Feb 12 20:57:27.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-480 execpod224tz -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30592' Feb 12 20:57:28.398: INFO: stderr: "I0212 20:57:28.082062 2133 log.go:172] (0xc000938dc0) (0xc00099c500) Create stream\nI0212 20:57:28.082392 2133 log.go:172] (0xc000938dc0) (0xc00099c500) Stream added, broadcasting: 1\nI0212 20:57:28.103366 2133 log.go:172] (0xc000938dc0) Reply frame received for 1\nI0212 20:57:28.103485 2133 log.go:172] (0xc000938dc0) (0xc00099c000) Create stream\nI0212 20:57:28.103512 2133 log.go:172] (0xc000938dc0) (0xc00099c000) Stream added, broadcasting: 3\nI0212 20:57:28.106880 2133 log.go:172] (0xc000938dc0) Reply frame received for 3\nI0212 20:57:28.107157 2133 log.go:172] (0xc000938dc0) (0xc000692960) Create stream\nI0212 
20:57:28.107228 2133 log.go:172] (0xc000938dc0) (0xc000692960) Stream added, broadcasting: 5\nI0212 20:57:28.109148 2133 log.go:172] (0xc000938dc0) Reply frame received for 5\nI0212 20:57:28.224660 2133 log.go:172] (0xc000938dc0) Data frame received for 5\nI0212 20:57:28.224839 2133 log.go:172] (0xc000692960) (5) Data frame handling\nI0212 20:57:28.224916 2133 log.go:172] (0xc000692960) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30592\nI0212 20:57:28.257149 2133 log.go:172] (0xc000938dc0) Data frame received for 5\nI0212 20:57:28.257537 2133 log.go:172] (0xc000692960) (5) Data frame handling\nI0212 20:57:28.257645 2133 log.go:172] (0xc000692960) (5) Data frame sent\nConnection to 10.96.1.234 30592 port [tcp/30592] succeeded!\nI0212 20:57:28.391182 2133 log.go:172] (0xc000938dc0) Data frame received for 1\nI0212 20:57:28.391268 2133 log.go:172] (0xc000938dc0) (0xc00099c000) Stream removed, broadcasting: 3\nI0212 20:57:28.391355 2133 log.go:172] (0xc00099c500) (1) Data frame handling\nI0212 20:57:28.391380 2133 log.go:172] (0xc00099c500) (1) Data frame sent\nI0212 20:57:28.391407 2133 log.go:172] (0xc000938dc0) (0xc000692960) Stream removed, broadcasting: 5\nI0212 20:57:28.391440 2133 log.go:172] (0xc000938dc0) (0xc00099c500) Stream removed, broadcasting: 1\nI0212 20:57:28.391463 2133 log.go:172] (0xc000938dc0) Go away received\nI0212 20:57:28.392139 2133 log.go:172] (0xc000938dc0) (0xc00099c500) Stream removed, broadcasting: 1\nI0212 20:57:28.392155 2133 log.go:172] (0xc000938dc0) (0xc00099c000) Stream removed, broadcasting: 3\nI0212 20:57:28.392171 2133 log.go:172] (0xc000938dc0) (0xc000692960) Stream removed, broadcasting: 5\n" Feb 12 20:57:28.398: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:57:28.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-480" for this suite. 
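All four connectivity checks in this spec appear in the exec output above and can be replayed from any pod in the cluster; the IPs and the NodePort below are the ones allocated during this run:

  nc -zv -t -w 2 nodeport-test 80      # service DNS name on the service port
  nc -zv -t -w 2 10.96.115.18 80       # ClusterIP on the service port
  nc -zv -t -w 2 10.96.2.250 30592     # first node's IP on the NodePort
  nc -zv -t -w 2 10.96.1.234 30592     # second node's IP on the NodePort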
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:696 • [SLOW TEST:22.822 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":277,"completed":77,"skipped":1161,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:57:28.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 12 20:57:29.418: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 12 20:57:31.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:57:33.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, 
loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:57:35.445: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:57:37.444: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:57:39.443: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 20:57:41.535: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717137849, 
loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 12 20:57:44.492: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created mutating webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of mutating webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:57:45.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4232" for this suite. STEP: Destroying namespace "webhook-4232-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.809 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":277,"completed":78,"skipped":1174,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:57:45.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Feb 12 20:57:45.277: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42c85f91-3b30-4b5c-8eae-b54a245bc934" in namespace "projected-6077" to be "Succeeded or Failed" Feb 12 20:57:45.282: INFO: Pod "downwardapi-volume-42c85f91-3b30-4b5c-8eae-b54a245bc934": Phase="Pending", Reason="", readiness=false.
Elapsed: 4.495255ms Feb 12 20:57:47.290: INFO: Pod "downwardapi-volume-42c85f91-3b30-4b5c-8eae-b54a245bc934": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012523222s Feb 12 20:57:49.295: INFO: Pod "downwardapi-volume-42c85f91-3b30-4b5c-8eae-b54a245bc934": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017381325s Feb 12 20:57:51.299: INFO: Pod "downwardapi-volume-42c85f91-3b30-4b5c-8eae-b54a245bc934": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021699476s Feb 12 20:57:53.304: INFO: Pod "downwardapi-volume-42c85f91-3b30-4b5c-8eae-b54a245bc934": Phase="Pending", Reason="", readiness=false. Elapsed: 8.026816581s Feb 12 20:57:55.308: INFO: Pod "downwardapi-volume-42c85f91-3b30-4b5c-8eae-b54a245bc934": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.031104589s STEP: Saw pod success Feb 12 20:57:55.308: INFO: Pod "downwardapi-volume-42c85f91-3b30-4b5c-8eae-b54a245bc934" satisfied condition "Succeeded or Failed" Feb 12 20:57:55.311: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-42c85f91-3b30-4b5c-8eae-b54a245bc934 container client-container: STEP: delete the pod Feb 12 20:57:55.373: INFO: Waiting for pod downwardapi-volume-42c85f91-3b30-4b5c-8eae-b54a245bc934 to disappear Feb 12 20:57:55.382: INFO: Pod downwardapi-volume-42c85f91-3b30-4b5c-8eae-b54a245bc934 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:57:55.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6077" for this suite. • [SLOW TEST:10.180 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":277,"completed":79,"skipped":1193,"failed":0} [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:57:55.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-866/configmap-test-7c4780e0-b596-4f50-ab6b-aadeb7956099 STEP: Creating a pod to test consume configMaps Feb 12 20:57:55.659: INFO: Waiting up to 5m0s for pod "pod-configmaps-e5955f14-a5f4-441e-975d-4b35ff08c632" in namespace "configmap-866" to be "Succeeded or Failed" Feb 12 20:57:55.682: INFO: Pod "pod-configmaps-e5955f14-a5f4-441e-975d-4b35ff08c632": Phase="Pending", Reason="", readiness=false. Elapsed: 23.545783ms Feb 12 20:57:57.696: INFO: Pod "pod-configmaps-e5955f14-a5f4-441e-975d-4b35ff08c632": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.036673526s Feb 12 20:57:59.701: INFO: Pod "pod-configmaps-e5955f14-a5f4-441e-975d-4b35ff08c632": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042068919s Feb 12 20:58:01.708: INFO: Pod "pod-configmaps-e5955f14-a5f4-441e-975d-4b35ff08c632": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048885751s Feb 12 20:58:03.735: INFO: Pod "pod-configmaps-e5955f14-a5f4-441e-975d-4b35ff08c632": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07599267s STEP: Saw pod success Feb 12 20:58:03.735: INFO: Pod "pod-configmaps-e5955f14-a5f4-441e-975d-4b35ff08c632" satisfied condition "Succeeded or Failed" Feb 12 20:58:03.740: INFO: Trying to get logs from node jerma-node pod pod-configmaps-e5955f14-a5f4-441e-975d-4b35ff08c632 container env-test: STEP: delete the pod Feb 12 20:58:03.788: INFO: Waiting for pod pod-configmaps-e5955f14-a5f4-441e-975d-4b35ff08c632 to disappear Feb 12 20:58:03.795: INFO: Pod pod-configmaps-e5955f14-a5f4-441e-975d-4b35ff08c632 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:58:03.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-866" for this suite. • [SLOW TEST:8.416 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":277,"completed":80,"skipped":1193,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:58:03.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Feb 12 20:58:04.040: INFO: Waiting up to 5m0s for pod "downwardapi-volume-247e9b7a-4b51-4052-855e-1781a092e6f8" in namespace "projected-518" to be "Succeeded or Failed" Feb 12 20:58:04.088: INFO: Pod "downwardapi-volume-247e9b7a-4b51-4052-855e-1781a092e6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 47.360048ms Feb 12 20:58:06.095: INFO: Pod "downwardapi-volume-247e9b7a-4b51-4052-855e-1781a092e6f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054575676s Feb 12 20:58:08.102: INFO: Pod "downwardapi-volume-247e9b7a-4b51-4052-855e-1781a092e6f8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.061433141s Feb 12 20:58:10.109: INFO: Pod "downwardapi-volume-247e9b7a-4b51-4052-855e-1781a092e6f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069014706s STEP: Saw pod success Feb 12 20:58:10.109: INFO: Pod "downwardapi-volume-247e9b7a-4b51-4052-855e-1781a092e6f8" satisfied condition "Succeeded or Failed" Feb 12 20:58:10.114: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-247e9b7a-4b51-4052-855e-1781a092e6f8 container client-container: STEP: delete the pod Feb 12 20:58:10.290: INFO: Waiting for pod downwardapi-volume-247e9b7a-4b51-4052-855e-1781a092e6f8 to disappear Feb 12 20:58:10.295: INFO: Pod downwardapi-volume-247e9b7a-4b51-4052-855e-1781a092e6f8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:58:10.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-518" for this suite. • [SLOW TEST:6.498 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":81,"skipped":1206,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:58:10.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 12 20:58:24.693: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 20:58:24.700: INFO: Pod pod-with-prestop-exec-hook still exists Feb 12 20:58:26.701: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 20:58:26.705: INFO: Pod pod-with-prestop-exec-hook still exists Feb 12 20:58:28.701: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 20:58:28.708: INFO: Pod pod-with-prestop-exec-hook still exists Feb 12 20:58:30.701: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 20:58:30.705: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:58:30.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9669" for this suite. • [SLOW TEST:20.416 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":277,"completed":82,"skipped":1208,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:58:30.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-2949/secret-test-e6a692aa-ca14-40f5-ad03-581fdd3e91bf STEP: Creating a pod to test consume secrets Feb 12 20:58:30.870: INFO: Waiting up to 5m0s for pod "pod-configmaps-3c6d76e2-f881-4162-a3cc-2819069985fe" in namespace "secrets-2949" to be "Succeeded or Failed" Feb 12 20:58:30.879: INFO: Pod "pod-configmaps-3c6d76e2-f881-4162-a3cc-2819069985fe": Phase="Pending", Reason="", readiness=false. Elapsed: 9.282482ms Feb 12 20:58:32.888: INFO: Pod "pod-configmaps-3c6d76e2-f881-4162-a3cc-2819069985fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017649495s Feb 12 20:58:34.896: INFO: Pod "pod-configmaps-3c6d76e2-f881-4162-a3cc-2819069985fe": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.025914417s Feb 12 20:58:36.905: INFO: Pod "pod-configmaps-3c6d76e2-f881-4162-a3cc-2819069985fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035084013s STEP: Saw pod success Feb 12 20:58:36.905: INFO: Pod "pod-configmaps-3c6d76e2-f881-4162-a3cc-2819069985fe" satisfied condition "Succeeded or Failed" Feb 12 20:58:36.910: INFO: Trying to get logs from node jerma-node pod pod-configmaps-3c6d76e2-f881-4162-a3cc-2819069985fe container env-test: STEP: delete the pod Feb 12 20:58:36.956: INFO: Waiting for pod pod-configmaps-3c6d76e2-f881-4162-a3cc-2819069985fe to disappear Feb 12 20:58:36.959: INFO: Pod pod-configmaps-3c6d76e2-f881-4162-a3cc-2819069985fe no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:58:36.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2949" for this suite. • [SLOW TEST:6.249 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":277,"completed":83,"skipped":1214,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:58:36.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:151 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:58:37.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1383" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":277,"completed":84,"skipped":1221,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:58:37.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:58:48.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8538" for this suite. • [SLOW TEST:11.416 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":277,"completed":85,"skipped":1229,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:58:48.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Feb 12 20:58:48.799: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 12 20:58:51.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1744 create -f -' Feb 12 20:58:55.865: INFO: stderr: "" Feb 12 20:58:55.865: INFO: stdout: "e2e-test-crd-publish-openapi-8380-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Feb 12 20:58:55.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1744 delete e2e-test-crd-publish-openapi-8380-crds test-cr' Feb 12 20:58:56.097: INFO: stderr: "" Feb 12 20:58:56.097: INFO: stdout: "e2e-test-crd-publish-openapi-8380-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Feb 12 20:58:56.097: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1744 apply -f -' Feb 12 20:58:56.653: INFO: stderr: "" Feb 12 20:58:56.653: INFO: stdout: "e2e-test-crd-publish-openapi-8380-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Feb 12 20:58:56.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1744 delete e2e-test-crd-publish-openapi-8380-crds test-cr' Feb 12 20:58:56.788: INFO: stderr: "" Feb 12 20:58:56.788: INFO: stdout: "e2e-test-crd-publish-openapi-8380-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Feb 12 20:58:56.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8380-crds' Feb 12 20:58:57.145: INFO: stderr: "" Feb 12 20:58:57.145: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8380-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:58:59.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1744" for this suite. • [SLOW TEST:10.364 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":277,"completed":86,"skipped":1229,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:58:59.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] 
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:58:59.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5833" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":277,"completed":87,"skipped":1240,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:58:59.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Feb 12 20:58:59.529: INFO: Number of nodes with available pods: 0 Feb 12 20:58:59.529: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:59:01.029: INFO: Number of nodes with available pods: 0 Feb 12 20:59:01.029: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:59:01.670: INFO: Number of nodes with available pods: 0 Feb 12 20:59:01.670: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:59:02.595: INFO: Number of nodes with available pods: 0 Feb 12 20:59:02.595: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:59:03.542: INFO: Number of nodes with available pods: 0 Feb 12 20:59:03.542: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:59:04.574: INFO: Number of nodes with available pods: 0 Feb 12 20:59:04.574: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:59:05.963: INFO: Number of nodes with available pods: 0 Feb 12 20:59:05.963: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:59:06.668: INFO: Number of nodes with available pods: 0 Feb 12 20:59:06.668: INFO: Node jerma-node is running more than one daemon pod Feb 12 20:59:07.540: INFO: Number of nodes with available pods: 1 Feb 12 20:59:07.540: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:08.543: INFO: Number of nodes with available pods: 1 Feb 12 20:59:08.543: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:09.540: INFO: Number of nodes with available pods: 2 Feb 12 20:59:09.540: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
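The step just announced deletes one of the daemon pods and waits for the DaemonSet controller to replace it; the polling entries that follow show availability dropping to one node and then recovering to two. Outside the e2e framework the same behaviour can be reproduced by hand. A minimal sketch, assuming a reachable cluster and kubectl on the PATH; the DaemonSet name and label below are hypothetical, not the ones used by this suite:

    # demo-ds.yaml: a trivial DaemonSet that places one pause pod per node
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: demo-daemon
    spec:
      selector:
        matchLabels:
          app: demo-daemon
      template:
        metadata:
          labels:
            app: demo-daemon
        spec:
          containers:
          - name: pause
            image: k8s.gcr.io/pause:3.1

    kubectl apply -f demo-ds.yaml
    kubectl delete pod -l app=demo-daemon      # stop the daemon pod(s)
    kubectl get pods -l app=demo-daemon -w     # watch replacements get scheduled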
Feb 12 20:59:09.663: INFO: Number of nodes with available pods: 1 Feb 12 20:59:09.663: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:10.672: INFO: Number of nodes with available pods: 1 Feb 12 20:59:10.672: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:11.676: INFO: Number of nodes with available pods: 1 Feb 12 20:59:11.676: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:12.693: INFO: Number of nodes with available pods: 1 Feb 12 20:59:12.693: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:13.677: INFO: Number of nodes with available pods: 1 Feb 12 20:59:13.677: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:14.674: INFO: Number of nodes with available pods: 1 Feb 12 20:59:14.674: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:15.674: INFO: Number of nodes with available pods: 1 Feb 12 20:59:15.674: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:16.679: INFO: Number of nodes with available pods: 1 Feb 12 20:59:16.679: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:17.677: INFO: Number of nodes with available pods: 1 Feb 12 20:59:17.677: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:18.674: INFO: Number of nodes with available pods: 1 Feb 12 20:59:18.674: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:19.676: INFO: Number of nodes with available pods: 1 Feb 12 20:59:19.676: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:20.672: INFO: Number of nodes with available pods: 1 Feb 12 20:59:20.673: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:21.675: INFO: Number of nodes with available pods: 1 Feb 12 20:59:21.675: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:22.674: INFO: Number of nodes with available pods: 1 Feb 12 20:59:22.674: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:23.678: INFO: Number of nodes with available pods: 1 Feb 12 20:59:23.678: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:24.857: INFO: Number of nodes with available pods: 1 Feb 12 20:59:24.857: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:25.673: INFO: Number of nodes with available pods: 1 Feb 12 20:59:25.673: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:26.674: INFO: Number of nodes with available pods: 1 Feb 12 20:59:26.674: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:27.880: INFO: Number of nodes with available pods: 1 Feb 12 20:59:27.880: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:28.743: INFO: Number of nodes with available pods: 1 Feb 12 20:59:28.743: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 12 20:59:29.679: INFO: Number of nodes with available pods: 2 Feb 12 20:59:29.679: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting 
DaemonSet.extensions daemon-set in namespace daemonsets-4727, will wait for the garbage collector to delete the pods Feb 12 20:59:29.758: INFO: Deleting DaemonSet.extensions daemon-set took: 10.28681ms Feb 12 20:59:30.059: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.406119ms Feb 12 20:59:43.163: INFO: Number of nodes with available pods: 0 Feb 12 20:59:43.163: INFO: Number of running nodes: 0, number of available pods: 0 Feb 12 20:59:43.168: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4727/daemonsets","resourceVersion":"8017288"},"items":null} Feb 12 20:59:43.178: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4727/pods","resourceVersion":"8017288"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:59:43.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4727" for this suite. • [SLOW TEST:44.049 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":277,"completed":88,"skipped":1324,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:59:43.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Feb 12 20:59:43.359: INFO: Waiting up to 5m0s for pod "var-expansion-45f0eb4c-ab0b-4798-af11-0152497d4536" in namespace "var-expansion-7303" to be "Succeeded or Failed" Feb 12 20:59:43.369: INFO: Pod "var-expansion-45f0eb4c-ab0b-4798-af11-0152497d4536": Phase="Pending", Reason="", readiness=false. Elapsed: 9.309987ms Feb 12 20:59:46.199: INFO: Pod "var-expansion-45f0eb4c-ab0b-4798-af11-0152497d4536": Phase="Pending", Reason="", readiness=false. Elapsed: 2.839564308s Feb 12 20:59:48.389: INFO: Pod "var-expansion-45f0eb4c-ab0b-4798-af11-0152497d4536": Phase="Pending", Reason="", readiness=false. Elapsed: 5.030062424s Feb 12 20:59:50.395: INFO: Pod "var-expansion-45f0eb4c-ab0b-4798-af11-0152497d4536": Phase="Pending", Reason="", readiness=false. Elapsed: 7.035628875s Feb 12 20:59:52.402: INFO: Pod "var-expansion-45f0eb4c-ab0b-4798-af11-0152497d4536": Phase="Pending", Reason="", readiness=false. Elapsed: 9.042540315s Feb 12 20:59:54.407: INFO: Pod "var-expansion-45f0eb4c-ab0b-4798-af11-0152497d4536": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 11.047895274s STEP: Saw pod success Feb 12 20:59:54.407: INFO: Pod "var-expansion-45f0eb4c-ab0b-4798-af11-0152497d4536" satisfied condition "Succeeded or Failed" Feb 12 20:59:54.411: INFO: Trying to get logs from node jerma-node pod var-expansion-45f0eb4c-ab0b-4798-af11-0152497d4536 container dapi-container: STEP: delete the pod Feb 12 20:59:54.457: INFO: Waiting for pod var-expansion-45f0eb4c-ab0b-4798-af11-0152497d4536 to disappear Feb 12 20:59:54.501: INFO: Pod var-expansion-45f0eb4c-ab0b-4798-af11-0152497d4536 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 20:59:54.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7303" for this suite. • [SLOW TEST:11.241 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":277,"completed":89,"skipped":1329,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 20:59:54.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:229 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Feb 12 20:59:54.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3308' Feb 12 20:59:54.945: INFO: stderr: "" Feb 12 20:59:54.945: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Feb 12 20:59:55.953: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 20:59:55.953: INFO: Found 0 / 1 Feb 12 20:59:56.950: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 20:59:56.950: INFO: Found 0 / 1 Feb 12 20:59:58.011: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 20:59:58.011: INFO: Found 0 / 1 Feb 12 20:59:58.952: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 20:59:58.952: INFO: Found 0 / 1 Feb 12 20:59:59.957: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 20:59:59.957: INFO: Found 0 / 1 Feb 12 21:00:00.952: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 21:00:00.952: INFO: Found 1 / 1 Feb 12 21:00:00.952: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 12 21:00:00.957: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 21:00:00.957: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
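The patch run in the next entry adds an annotation with kubectl's default strategic-merge patch. Pulled out of the framework invocation, the equivalent standalone commands are (pod and namespace names taken from the surrounding log entries):

    kubectl patch pod agnhost-master-r4pn5 --namespace=kubectl-3308 \
        -p '{"metadata":{"annotations":{"x":"y"}}}'

    # Equivalent, and more idiomatic for a simple annotation change:
    kubectl annotate pod agnhost-master-r4pn5 --namespace=kubectl-3308 x=y

For map-valued fields such as metadata.annotations, a strategic-merge patch and a JSON merge patch (--type=merge) behave the same way: keys are merged into the existing map rather than replacing it.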
Feb 12 21:00:00.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-r4pn5 --namespace=kubectl-3308 -p {"metadata":{"annotations":{"x":"y"}}}' Feb 12 21:00:01.134: INFO: stderr: "" Feb 12 21:00:01.135: INFO: stdout: "pod/agnhost-master-r4pn5 patched\n" STEP: checking annotations Feb 12 21:00:01.140: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 21:00:01.140: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:00:01.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3308" for this suite. • [SLOW TEST:6.635 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1491 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":277,"completed":90,"skipped":1334,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:00:01.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4357.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4357.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4357.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4357.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4357.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4357.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4357.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4357.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4357.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4357.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4357.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 187.243.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.243.187_udp@PTR;check="$$(dig +tcp +noall +answer +search 187.243.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.243.187_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4357.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4357.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4357.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4357.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4357.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4357.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4357.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4357.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4357.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4357.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4357.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 187.243.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.243.187_udp@PTR;check="$$(dig +tcp +noall +answer +search 187.243.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.243.187_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 12 21:00:13.677: INFO: Unable to read wheezy_udp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:13.680: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:13.684: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:13.688: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:13.733: INFO: Unable to read jessie_udp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:13.739: INFO: Unable to read jessie_tcp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:13.747: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:13.752: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:13.804: INFO: Lookups using dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b failed for: [wheezy_udp@dns-test-service.dns-4357.svc.cluster.local wheezy_tcp@dns-test-service.dns-4357.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local jessie_udp@dns-test-service.dns-4357.svc.cluster.local jessie_tcp@dns-test-service.dns-4357.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local] Feb 12 21:00:18.813: INFO: Unable to read wheezy_udp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:18.819: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods 
dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:18.827: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:18.832: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:18.892: INFO: Unable to read jessie_udp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:18.899: INFO: Unable to read jessie_tcp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:18.904: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:18.913: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:18.964: INFO: Lookups using dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b failed for: [wheezy_udp@dns-test-service.dns-4357.svc.cluster.local wheezy_tcp@dns-test-service.dns-4357.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local jessie_udp@dns-test-service.dns-4357.svc.cluster.local jessie_tcp@dns-test-service.dns-4357.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local] Feb 12 21:00:23.813: INFO: Unable to read wheezy_udp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:23.817: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:23.822: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:23.828: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:23.902: INFO: Unable to read jessie_udp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the 
server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:23.912: INFO: Unable to read jessie_tcp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:23.923: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:23.931: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:23.963: INFO: Lookups using dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b failed for: [wheezy_udp@dns-test-service.dns-4357.svc.cluster.local wheezy_tcp@dns-test-service.dns-4357.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local jessie_udp@dns-test-service.dns-4357.svc.cluster.local jessie_tcp@dns-test-service.dns-4357.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local] Feb 12 21:00:28.816: INFO: Unable to read wheezy_udp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:28.821: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:28.826: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:28.830: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:28.870: INFO: Unable to read jessie_udp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:28.874: INFO: Unable to read jessie_tcp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:28.877: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:28.884: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod 
dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:28.911: INFO: Lookups using dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b failed for: [wheezy_udp@dns-test-service.dns-4357.svc.cluster.local wheezy_tcp@dns-test-service.dns-4357.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local jessie_udp@dns-test-service.dns-4357.svc.cluster.local jessie_tcp@dns-test-service.dns-4357.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local] Feb 12 21:00:33.819: INFO: Unable to read wheezy_udp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:33.830: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:33.863: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:33.877: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:33.933: INFO: Unable to read jessie_udp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:33.941: INFO: Unable to read jessie_tcp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:33.946: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:33.951: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:33.985: INFO: Lookups using dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b failed for: [wheezy_udp@dns-test-service.dns-4357.svc.cluster.local wheezy_tcp@dns-test-service.dns-4357.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local jessie_udp@dns-test-service.dns-4357.svc.cluster.local jessie_tcp@dns-test-service.dns-4357.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local] Feb 12 
21:00:38.859: INFO: Unable to read wheezy_udp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:38.866: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:38.871: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:38.876: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:38.906: INFO: Unable to read jessie_udp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:38.920: INFO: Unable to read jessie_tcp@dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:38.925: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:38.928: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local from pod dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b: the server could not find the requested resource (get pods dns-test-ebc106de-c35d-458c-8524-f00626cb370b) Feb 12 21:00:38.959: INFO: Lookups using dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b failed for: [wheezy_udp@dns-test-service.dns-4357.svc.cluster.local wheezy_tcp@dns-test-service.dns-4357.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local jessie_udp@dns-test-service.dns-4357.svc.cluster.local jessie_tcp@dns-test-service.dns-4357.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4357.svc.cluster.local] Feb 12 21:00:44.013: INFO: DNS probes using dns-4357/dns-test-ebc106de-c35d-458c-8524-f00626cb370b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:00:44.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4357" for this suite. 
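The probe loops above exercise the standard in-cluster DNS names for a Service: an A record for the service, an SRV record per named port, plus pod A records and a PTR record for the ClusterIP. From any pod with dig installed, the same lookups can be issued directly; the names below are taken from the log (the service and namespace no longer exist once the suite tears them down):

    dig +search +noall +answer dns-test-service.dns-4357.svc.cluster.local A
    dig +search +noall +answer _http._tcp.dns-test-service.dns-4357.svc.cluster.local SRV
    dig +noall +answer -x 10.96.243.187   # PTR lookup for the service ClusterIP

The repeated "Unable to read" entries are expected while cluster DNS converges on the new records; the test only fails if the lookups never succeed within the timeout, and here they succeeded at 21:00:44.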
• [SLOW TEST:43.265 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":277,"completed":91,"skipped":1344,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:00:44.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 12 21:00:45.024: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 12 21:00:47.069: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138045, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138045, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138045, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138045, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 21:00:49.074: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138045, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138045, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138045, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138045, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 21:00:51.084: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138045, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138045, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138045, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138045, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 12 21:00:54.156: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:00:54.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4141" for this suite. STEP: Destroying namespace "webhook-4141-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.477 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":277,"completed":92,"skipped":1359,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:00:54.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:01:06.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-161" for this suite. • [SLOW TEST:11.273 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":277,"completed":93,"skipped":1372,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:01:06.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 12 21:01:07.100: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 12 21:01:09.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138067, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138067, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138067, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138067, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 21:01:11.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138067, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138067, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138067, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138067, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 21:01:13.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138067, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138067, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138067, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138067, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 12 21:01:16.176: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API Feb 12 21:01:16.225: INFO: Waiting for webhook configuration to be ready... STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:01:16.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9498" for this suite. STEP: Destroying namespace "webhook-9498-markers" for this suite. 
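For context, the mutating pod webhook registered above is wired to the in-cluster service through a MutatingWebhookConfiguration. A minimal sketch of such an object follows; the webhook name, handler path, and CA bundle are hypothetical, while the service name and namespace are the ones appearing in this log:

    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: demo-pod-mutator
    webhooks:
    - name: pod-mutator.example.com
      admissionReviewVersions: ["v1"]
      sideEffects: None
      failurePolicy: Fail
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
      clientConfig:
        service:
          namespace: webhook-9498
          name: e2e-test-webhook
          path: /mutating-pods        # hypothetical handler path
        caBundle: ""                  # base64 CA for the serving cert, elided here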
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.447 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":277,"completed":94,"skipped":1402,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:01:16.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Feb 12 21:01:16.663: INFO: >>> kubeConfig: /root/.kube/config Feb 12 21:01:19.189: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:01:31.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-339" for this suite. 
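The assertion above is that the schemas of two CRDs in different API groups both show up in the aggregated OpenAPI document. That document can also be inspected directly; a sketch, with a hypothetical group name:

    # Fetch the cluster's OpenAPI v2 document and list the definitions
    # contributed by CRDs in a given group. Definition keys use the
    # reversed group name, e.g. a CRD in group stable.example.com is
    # keyed as "com.example.stable.v1.<Kind>".
    kubectl get --raw /openapi/v2 | grep -o '"com\.example\.[^"]*"' | sort -u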
• [SLOW TEST:14.887 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":277,"completed":95,"skipped":1424,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:01:31.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-893ce40c-9c20-44e1-bdd1-be73153cde84 STEP: Creating a pod to test consume configMaps Feb 12 21:01:31.566: INFO: Waiting up to 5m0s for pod "pod-configmaps-b34447d2-3f27-4b15-9b3d-759b4490819a" in namespace "configmap-1693" to be "Succeeded or Failed" Feb 12 21:01:31.574: INFO: Pod "pod-configmaps-b34447d2-3f27-4b15-9b3d-759b4490819a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.916785ms Feb 12 21:01:33.580: INFO: Pod "pod-configmaps-b34447d2-3f27-4b15-9b3d-759b4490819a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014488194s Feb 12 21:01:35.601: INFO: Pod "pod-configmaps-b34447d2-3f27-4b15-9b3d-759b4490819a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035131863s Feb 12 21:01:37.625: INFO: Pod "pod-configmaps-b34447d2-3f27-4b15-9b3d-759b4490819a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059265744s STEP: Saw pod success Feb 12 21:01:37.625: INFO: Pod "pod-configmaps-b34447d2-3f27-4b15-9b3d-759b4490819a" satisfied condition "Succeeded or Failed" Feb 12 21:01:37.629: INFO: Trying to get logs from node jerma-node pod pod-configmaps-b34447d2-3f27-4b15-9b3d-759b4490819a container configmap-volume-test: STEP: delete the pod Feb 12 21:01:37.733: INFO: Waiting for pod pod-configmaps-b34447d2-3f27-4b15-9b3d-759b4490819a to disappear Feb 12 21:01:37.738: INFO: Pod pod-configmaps-b34447d2-3f27-4b15-9b3d-759b4490819a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:01:37.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1693" for this suite. 
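The pod above mounts the ConfigMap as a volume and consumes it while running as a non-root UID. A minimal equivalent, with hypothetical names and UID:

    kubectl create configmap demo-config --from-literal=key=value

    # cm-nonroot.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-nonroot-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000                # run the whole pod as non-root
      containers:
      - name: configmap-volume-test
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/cm/key"]
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: demo-config

ConfigMap volume files default to mode 0644, so the non-root reader works without further tuning.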
• [SLOW TEST:6.246 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":277,"completed":96,"skipped":1449,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:01:37.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 12 21:01:42.990: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:01:43.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-774" for this suite. 
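What the termination-message check above exercises: when terminationMessagePath points at a non-default file and the container, running as a non-root user, writes to it, the kubelet copies the content into the container status. A minimal sketch with illustrative names:

$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo -n DONE > /dev/custom-termination-log"]
    terminationMessagePath: /dev/custom-termination-log   # non-default path
EOF
# after the container terminates, the message shows up in the status:
$ kubectl get pod termination-demo \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # -> DONE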
• [SLOW TEST:5.339 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":277,"completed":97,"skipped":1451,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:01:43.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-qll5 STEP: Creating a pod to test atomic-volume-subpath Feb 12 21:01:43.215: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-qll5" in namespace "subpath-3170" to be "Succeeded or Failed" Feb 12 21:01:43.249: INFO: Pod "pod-subpath-test-configmap-qll5": Phase="Pending", Reason="", readiness=false. Elapsed: 33.836751ms Feb 12 21:01:45.254: INFO: Pod "pod-subpath-test-configmap-qll5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03883115s Feb 12 21:01:47.265: INFO: Pod "pod-subpath-test-configmap-qll5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050126079s Feb 12 21:01:49.272: INFO: Pod "pod-subpath-test-configmap-qll5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057098189s Feb 12 21:01:51.279: INFO: Pod "pod-subpath-test-configmap-qll5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064008646s Feb 12 21:01:53.285: INFO: Pod "pod-subpath-test-configmap-qll5": Phase="Running", Reason="", readiness=true. Elapsed: 10.070468788s Feb 12 21:01:55.291: INFO: Pod "pod-subpath-test-configmap-qll5": Phase="Running", Reason="", readiness=true. Elapsed: 12.075905718s Feb 12 21:01:57.300: INFO: Pod "pod-subpath-test-configmap-qll5": Phase="Running", Reason="", readiness=true. Elapsed: 14.084683994s Feb 12 21:01:59.341: INFO: Pod "pod-subpath-test-configmap-qll5": Phase="Running", Reason="", readiness=true. Elapsed: 16.126574575s Feb 12 21:02:01.348: INFO: Pod "pod-subpath-test-configmap-qll5": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.133508436s Feb 12 21:02:03.353: INFO: Pod "pod-subpath-test-configmap-qll5": Phase="Running", Reason="", readiness=true. Elapsed: 20.138672246s Feb 12 21:02:05.428: INFO: Pod "pod-subpath-test-configmap-qll5": Phase="Running", Reason="", readiness=true. Elapsed: 22.213385953s Feb 12 21:02:07.435: INFO: Pod "pod-subpath-test-configmap-qll5": Phase="Running", Reason="", readiness=true. Elapsed: 24.219815781s Feb 12 21:02:09.442: INFO: Pod "pod-subpath-test-configmap-qll5": Phase="Running", Reason="", readiness=true. Elapsed: 26.226855078s Feb 12 21:02:11.449: INFO: Pod "pod-subpath-test-configmap-qll5": Phase="Running", Reason="", readiness=true. Elapsed: 28.233989026s Feb 12 21:02:13.453: INFO: Pod "pod-subpath-test-configmap-qll5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.237989638s STEP: Saw pod success Feb 12 21:02:13.453: INFO: Pod "pod-subpath-test-configmap-qll5" satisfied condition "Succeeded or Failed" Feb 12 21:02:13.455: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-qll5 container test-container-subpath-configmap-qll5: STEP: delete the pod Feb 12 21:02:13.516: INFO: Waiting for pod pod-subpath-test-configmap-qll5 to disappear Feb 12 21:02:13.523: INFO: Pod pod-subpath-test-configmap-qll5 no longer exists STEP: Deleting pod pod-subpath-test-configmap-qll5 Feb 12 21:02:13.523: INFO: Deleting pod "pod-subpath-test-configmap-qll5" in namespace "subpath-3170" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:02:13.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3170" for this suite. • [SLOW TEST:30.444 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":277,"completed":98,"skipped":1468,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:02:13.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 12 21:02:13.756: INFO: Waiting up to 5m0s for pod "pod-8a378bd0-f71e-4ec7-8283-e4bea810e249" in namespace "emptydir-7549" to be "Succeeded or Failed" Feb 12 21:02:13.782: INFO: Pod "pod-8a378bd0-f71e-4ec7-8283-e4bea810e249": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.403562ms Feb 12 21:02:15.789: INFO: Pod "pod-8a378bd0-f71e-4ec7-8283-e4bea810e249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033282463s Feb 12 21:02:17.796: INFO: Pod "pod-8a378bd0-f71e-4ec7-8283-e4bea810e249": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039935218s Feb 12 21:02:19.803: INFO: Pod "pod-8a378bd0-f71e-4ec7-8283-e4bea810e249": Phase="Running", Reason="", readiness=true. Elapsed: 6.046950315s Feb 12 21:02:21.811: INFO: Pod "pod-8a378bd0-f71e-4ec7-8283-e4bea810e249": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054641977s STEP: Saw pod success Feb 12 21:02:21.811: INFO: Pod "pod-8a378bd0-f71e-4ec7-8283-e4bea810e249" satisfied condition "Succeeded or Failed" Feb 12 21:02:21.815: INFO: Trying to get logs from node jerma-node pod pod-8a378bd0-f71e-4ec7-8283-e4bea810e249 container test-container: STEP: delete the pod Feb 12 21:02:21.997: INFO: Waiting for pod pod-8a378bd0-f71e-4ec7-8283-e4bea810e249 to disappear Feb 12 21:02:22.004: INFO: Pod pod-8a378bd0-f71e-4ec7-8283-e4bea810e249 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:02:22.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7549" for this suite. • [SLOW TEST:8.485 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":99,"skipped":1470,"failed":0} SSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:02:22.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Feb 12 21:02:22.169: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Feb 12 21:02:27.175: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 12 21:02:29.191: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Feb 12 21:02:29.284: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-21 /apis/apps/v1/namespaces/deployment-21/deployments/test-cleanup-deployment 2c579fd4-8277-4cf5-a5f4-e402e124d42e 8018125 1 2020-02-12 21:02:29 +0000 UTC map[name:cleanup-pod] map[] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004695c48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Feb 12 21:02:29.297: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. Feb 12 21:02:29.298: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Feb 12 21:02:29.298: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-21 /apis/apps/v1/namespaces/deployment-21/replicasets/test-cleanup-controller 42d0a981-f667-4d0a-b646-a3e175c9855a 8018126 1 2020-02-12 21:02:22 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 2c579fd4-8277-4cf5-a5f4-e402e124d42e 0xc0040d8147 0xc0040d8148}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0040d81a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Feb 12 21:02:29.318: INFO: Pod "test-cleanup-controller-b7897" is available: &Pod{ObjectMeta:{test-cleanup-controller-b7897 test-cleanup-controller- deployment-21 /api/v1/namespaces/deployment-21/pods/test-cleanup-controller-b7897 b1e4d478-fc9f-4f0e-b4a9-afdf7335cd76 8018120 0 2020-02-12 21:02:22 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 42d0a981-f667-4d0a-b646-a3e175c9855a 0xc0040d8697 0xc0040d8698}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-82vkl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-82vkl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-82vkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 21:02:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 21:02:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 21:02:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 21:02:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-12 21:02:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-12 21:02:26 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://0b7dcea68d04759bf4e6c18b208eb999bc8cb9057dd793eb2be9abdbeeaf4708,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:02:29.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-21" for this suite. • [SLOW TEST:7.441 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":277,"completed":100,"skipped":1474,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:02:29.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Feb 12 21:02:29.658: INFO: Waiting up to 5m0s for pod "downwardapi-volume-37f9f3fd-1c52-48a8-b9f8-0f427504538b" in namespace "projected-6097" to be "Succeeded or Failed" Feb 12 21:02:29.675: INFO: Pod "downwardapi-volume-37f9f3fd-1c52-48a8-b9f8-0f427504538b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.188084ms Feb 12 21:02:31.683: INFO: Pod "downwardapi-volume-37f9f3fd-1c52-48a8-b9f8-0f427504538b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024761666s Feb 12 21:02:33.690: INFO: Pod "downwardapi-volume-37f9f3fd-1c52-48a8-b9f8-0f427504538b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031260969s Feb 12 21:02:35.697: INFO: Pod "downwardapi-volume-37f9f3fd-1c52-48a8-b9f8-0f427504538b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038275375s Feb 12 21:02:37.701: INFO: Pod "downwardapi-volume-37f9f3fd-1c52-48a8-b9f8-0f427504538b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042634182s Feb 12 21:02:39.706: INFO: Pod "downwardapi-volume-37f9f3fd-1c52-48a8-b9f8-0f427504538b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.047790561s Feb 12 21:02:41.714: INFO: Pod "downwardapi-volume-37f9f3fd-1c52-48a8-b9f8-0f427504538b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.055650503s STEP: Saw pod success Feb 12 21:02:41.714: INFO: Pod "downwardapi-volume-37f9f3fd-1c52-48a8-b9f8-0f427504538b" satisfied condition "Succeeded or Failed" Feb 12 21:02:41.718: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-37f9f3fd-1c52-48a8-b9f8-0f427504538b container client-container: STEP: delete the pod Feb 12 21:02:41.754: INFO: Waiting for pod downwardapi-volume-37f9f3fd-1c52-48a8-b9f8-0f427504538b to disappear Feb 12 21:02:41.759: INFO: Pod downwardapi-volume-37f9f3fd-1c52-48a8-b9f8-0f427504538b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:02:41.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6097" for this suite. • [SLOW TEST:12.313 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":277,"completed":101,"skipped":1479,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:02:41.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:02:48.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8915" for this suite. 
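The Docker Containers check above verifies that a container with neither command: nor args: falls back to the image's own ENTRYPOINT and CMD. A sketch, reusing the agnhost image that appears elsewhere in this log (any image with a usable default entrypoint works):

$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    # no command:/args: here, so the image metadata decides what runs
EOF
# the pod spec stays empty; the container runtime resolves the defaults:
$ kubectl get pod image-defaults-demo -o jsonpath='{.spec.containers[0].command}'   # -> (empty)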
• [SLOW TEST:6.381 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":277,"completed":102,"skipped":1481,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:02:48.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 12 21:02:48.313: INFO: Waiting up to 5m0s for pod "pod-69d35933-1ad6-4123-8fa6-776147957585" in namespace "emptydir-8129" to be "Succeeded or Failed" Feb 12 21:02:48.387: INFO: Pod "pod-69d35933-1ad6-4123-8fa6-776147957585": Phase="Pending", Reason="", readiness=false. Elapsed: 74.169093ms Feb 12 21:02:50.394: INFO: Pod "pod-69d35933-1ad6-4123-8fa6-776147957585": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080840003s Feb 12 21:02:52.406: INFO: Pod "pod-69d35933-1ad6-4123-8fa6-776147957585": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093062192s Feb 12 21:02:54.410: INFO: Pod "pod-69d35933-1ad6-4123-8fa6-776147957585": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096507084s Feb 12 21:02:56.416: INFO: Pod "pod-69d35933-1ad6-4123-8fa6-776147957585": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103021334s Feb 12 21:02:58.426: INFO: Pod "pod-69d35933-1ad6-4123-8fa6-776147957585": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.112399279s STEP: Saw pod success Feb 12 21:02:58.426: INFO: Pod "pod-69d35933-1ad6-4123-8fa6-776147957585" satisfied condition "Succeeded or Failed" Feb 12 21:02:58.430: INFO: Trying to get logs from node jerma-node pod pod-69d35933-1ad6-4123-8fa6-776147957585 container test-container: STEP: delete the pod Feb 12 21:02:58.724: INFO: Waiting for pod pod-69d35933-1ad6-4123-8fa6-776147957585 to disappear Feb 12 21:02:58.761: INFO: Pod pod-69d35933-1ad6-4123-8fa6-776147957585 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:02:58.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8129" for this suite. 
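The EmptyDir (non-root,0666,tmpfs) variant above means: a memory-backed emptyDir, written by a non-root UID, with the file created mode 0666. A hand-run equivalent (pod name and UID illustrative):

$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f && mount | grep ' /mnt '"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory        # tmpfs-backed; omit "medium" for the node's default storage
EOF
$ kubectl logs pod/emptydir-tmpfs-demo   # shows -rw-rw-rw- and a tmpfs mount on /mnt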
• [SLOW TEST:10.614 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":103,"skipped":1493,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:02:58.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Feb 12 21:02:58.856: INFO: Waiting up to 5m0s for pod "downwardapi-volume-39db14cd-8209-45a4-b7ec-d8c62c78648a" in namespace "projected-6474" to be "Succeeded or Failed" Feb 12 21:02:58.861: INFO: Pod "downwardapi-volume-39db14cd-8209-45a4-b7ec-d8c62c78648a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.963355ms Feb 12 21:03:00.868: INFO: Pod "downwardapi-volume-39db14cd-8209-45a4-b7ec-d8c62c78648a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011882142s Feb 12 21:03:02.908: INFO: Pod "downwardapi-volume-39db14cd-8209-45a4-b7ec-d8c62c78648a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051565394s Feb 12 21:03:04.914: INFO: Pod "downwardapi-volume-39db14cd-8209-45a4-b7ec-d8c62c78648a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057941944s Feb 12 21:03:06.922: INFO: Pod "downwardapi-volume-39db14cd-8209-45a4-b7ec-d8c62c78648a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065822533s STEP: Saw pod success Feb 12 21:03:06.922: INFO: Pod "downwardapi-volume-39db14cd-8209-45a4-b7ec-d8c62c78648a" satisfied condition "Succeeded or Failed" Feb 12 21:03:06.926: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-39db14cd-8209-45a4-b7ec-d8c62c78648a container client-container: STEP: delete the pod Feb 12 21:03:06.972: INFO: Waiting for pod downwardapi-volume-39db14cd-8209-45a4-b7ec-d8c62c78648a to disappear Feb 12 21:03:06.981: INFO: Pod downwardapi-volume-39db14cd-8209-45a4-b7ec-d8c62c78648a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:03:06.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6474" for this suite. 
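Both projected-downwardAPI specs in this stretch of the run work the same way: a projected volume exposes a container resource field as a file, and when the field is an unset limit (as in the memory test further up) the value defaults to the node's allocatable. A sketch of the cpu-request case with an explicit divisor; names are illustrative:

$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m          # report in millicores: the file contains "250"
EOF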
• [SLOW TEST:8.220 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":277,"completed":104,"skipped":1504,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:03:06.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:03:07.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-9633" for this suite. STEP: Destroying namespace "nspatchtest-4b7cb076-0503-4d4c-b62d-7dfcaae52910-8463" for this suite. 
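The Namespaces spec above is a patch round-trip: create a namespace, patch a label onto it, read the label back. The same steps by hand (label key and value are illustrative):

$ kubectl create namespace patch-demo
$ kubectl patch namespace patch-demo --type=merge \
    -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'
$ kubectl get namespace patch-demo -o jsonpath='{.metadata.labels.testLabel}'   # -> testValue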
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":277,"completed":105,"skipped":1524,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:03:07.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Feb 12 21:03:07.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Feb 12 21:03:08.032: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-12T21:03:07Z generation:1 name:name1 resourceVersion:8018355 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0c6963c0-b92f-45b1-a392-ec5cb1d4f862] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Feb 12 21:03:18.038: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-12T21:03:18Z generation:1 name:name2 resourceVersion:8018397 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2df6a8a4-6434-4fe3-9823-cc5c9c226b44] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Feb 12 21:03:28.049: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-12T21:03:07Z generation:2 name:name1 resourceVersion:8018421 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0c6963c0-b92f-45b1-a392-ec5cb1d4f862] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Feb 12 21:03:38.055: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-12T21:03:18Z generation:2 name:name2 resourceVersion:8018445 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2df6a8a4-6434-4fe3-9823-cc5c9c226b44] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Feb 12 21:03:48.063: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-12T21:03:07Z generation:2 name:name1 resourceVersion:8018469 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0c6963c0-b92f-45b1-a392-ec5cb1d4f862] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Feb 12 21:03:58.078: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-12T21:03:18Z generation:2 name:name2 resourceVersion:8018491 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:2df6a8a4-6434-4fe3-9823-cc5c9c226b44] 
num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:04:08.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-245" for this suite. • [SLOW TEST:61.442 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":277,"completed":106,"skipped":1544,"failed":0} [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:04:08.634: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0212 21:04:11.604139 9 metrics_grabber.go:80] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 12 21:04:11.604: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:04:11.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-635" for this suite. 
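The garbage-collector spec above deletes a Deployment with deleteOptions.PropagationPolicy=Orphan and checks that the owned ReplicaSet survives. From kubectl of this vintage the orphaning delete is spelled --cascade=false (newer releases spell it --cascade=orphan); gc-demo and nginx are illustrative:

$ kubectl create deployment gc-demo --image=nginx
$ kubectl get rs -l app=gc-demo                       # the owned ReplicaSet exists
$ kubectl delete deployment gc-demo --cascade=false   # propagationPolicy=Orphan under the hood
$ kubectl get rs -l app=gc-demo                       # still there; the GC has cleared its ownerReferences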
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":277,"completed":107,"skipped":1544,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:04:11.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-7853 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 12 21:04:15.031: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Feb 12 21:04:15.346: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 12 21:04:17.436: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 12 21:04:20.870: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 12 21:04:21.383: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 12 21:04:23.766: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 12 21:04:25.622: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 12 21:04:28.589: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 12 21:04:29.358: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Feb 12 21:04:31.352: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 12 21:04:33.353: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 12 21:04:35.352: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 12 21:04:37.354: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 12 21:04:39.387: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 12 21:04:41.366: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 12 21:04:43.351: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 12 21:04:45.353: INFO: The status of Pod netserver-0 is Running (Ready = false) Feb 12 21:04:47.354: INFO: The status of Pod netserver-0 is Running (Ready = true) Feb 12 21:04:47.387: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Feb 12 21:04:55.471: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.2 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7853 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 21:04:55.471: INFO: >>> kubeConfig: /root/.kube/config I0212 21:04:55.515581 9 log.go:172] (0xc005f26790) (0xc000e8fa40) Create stream I0212 21:04:55.515623 9 log.go:172] 
(0xc005f26790) (0xc000e8fa40) Stream added, broadcasting: 1 I0212 21:04:55.519524 9 log.go:172] (0xc005f26790) Reply frame received for 1 I0212 21:04:55.519557 9 log.go:172] (0xc005f26790) (0xc000e8ff40) Create stream I0212 21:04:55.519565 9 log.go:172] (0xc005f26790) (0xc000e8ff40) Stream added, broadcasting: 3 I0212 21:04:55.520989 9 log.go:172] (0xc005f26790) Reply frame received for 3 I0212 21:04:55.521018 9 log.go:172] (0xc005f26790) (0xc00129fcc0) Create stream I0212 21:04:55.521028 9 log.go:172] (0xc005f26790) (0xc00129fcc0) Stream added, broadcasting: 5 I0212 21:04:55.522299 9 log.go:172] (0xc005f26790) Reply frame received for 5 I0212 21:04:56.617188 9 log.go:172] (0xc005f26790) Data frame received for 3 I0212 21:04:56.617258 9 log.go:172] (0xc000e8ff40) (3) Data frame handling I0212 21:04:56.617288 9 log.go:172] (0xc000e8ff40) (3) Data frame sent I0212 21:04:56.720874 9 log.go:172] (0xc005f26790) (0xc000e8ff40) Stream removed, broadcasting: 3 I0212 21:04:56.720978 9 log.go:172] (0xc005f26790) Data frame received for 1 I0212 21:04:56.720994 9 log.go:172] (0xc000e8fa40) (1) Data frame handling I0212 21:04:56.721008 9 log.go:172] (0xc000e8fa40) (1) Data frame sent I0212 21:04:56.721021 9 log.go:172] (0xc005f26790) (0xc000e8fa40) Stream removed, broadcasting: 1 I0212 21:04:56.721119 9 log.go:172] (0xc005f26790) (0xc00129fcc0) Stream removed, broadcasting: 5 I0212 21:04:56.721157 9 log.go:172] (0xc005f26790) (0xc000e8fa40) Stream removed, broadcasting: 1 I0212 21:04:56.721169 9 log.go:172] (0xc005f26790) (0xc000e8ff40) Stream removed, broadcasting: 3 I0212 21:04:56.721182 9 log.go:172] (0xc005f26790) (0xc00129fcc0) Stream removed, broadcasting: 5 I0212 21:04:56.721326 9 log.go:172] (0xc005f26790) Go away received Feb 12 21:04:56.721: INFO: Found all expected endpoints: [netserver-0] Feb 12 21:04:56.725: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7853 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 21:04:56.725: INFO: >>> kubeConfig: /root/.kube/config I0212 21:04:56.775299 9 log.go:172] (0xc002c5c840) (0xc000a7c8c0) Create stream I0212 21:04:56.775377 9 log.go:172] (0xc002c5c840) (0xc000a7c8c0) Stream added, broadcasting: 1 I0212 21:04:56.780282 9 log.go:172] (0xc002c5c840) Reply frame received for 1 I0212 21:04:56.780318 9 log.go:172] (0xc002c5c840) (0xc000400780) Create stream I0212 21:04:56.780328 9 log.go:172] (0xc002c5c840) (0xc000400780) Stream added, broadcasting: 3 I0212 21:04:56.781987 9 log.go:172] (0xc002c5c840) Reply frame received for 3 I0212 21:04:56.782007 9 log.go:172] (0xc002c5c840) (0xc001250640) Create stream I0212 21:04:56.782016 9 log.go:172] (0xc002c5c840) (0xc001250640) Stream added, broadcasting: 5 I0212 21:04:56.784590 9 log.go:172] (0xc002c5c840) Reply frame received for 5 I0212 21:04:57.864297 9 log.go:172] (0xc002c5c840) Data frame received for 3 I0212 21:04:57.864473 9 log.go:172] (0xc000400780) (3) Data frame handling I0212 21:04:57.864502 9 log.go:172] (0xc000400780) (3) Data frame sent I0212 21:04:57.933967 9 log.go:172] (0xc002c5c840) Data frame received for 1 I0212 21:04:57.934007 9 log.go:172] (0xc000a7c8c0) (1) Data frame handling I0212 21:04:57.934019 9 log.go:172] (0xc000a7c8c0) (1) Data frame sent I0212 21:04:57.934576 9 log.go:172] (0xc002c5c840) (0xc000a7c8c0) Stream removed, broadcasting: 1 I0212 21:04:57.935328 9 log.go:172] (0xc002c5c840) (0xc000400780) Stream removed, 
broadcasting: 3 I0212 21:04:57.935487 9 log.go:172] (0xc002c5c840) (0xc001250640) Stream removed, broadcasting: 5 I0212 21:04:57.935534 9 log.go:172] (0xc002c5c840) (0xc000a7c8c0) Stream removed, broadcasting: 1 I0212 21:04:57.935566 9 log.go:172] (0xc002c5c840) (0xc000400780) Stream removed, broadcasting: 3 I0212 21:04:57.935660 9 log.go:172] (0xc002c5c840) (0xc001250640) Stream removed, broadcasting: 5 Feb 12 21:04:57.936: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:04:57.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7853" for this suite. • [SLOW TEST:46.331 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":108,"skipped":1545,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:04:57.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Feb 12 21:04:58.037: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:05:06.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5904" for this suite. 
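The websockets spec above drives the same pods/exec subresource that kubectl exec negotiates; only the transport differs. A hand-run equivalent of the round trip (pod name and command illustrative):

$ kubectl run ws-demo --image=busybox --restart=Never -- sleep 3600
$ kubectl wait --for=condition=Ready pod/ws-demo
$ kubectl exec ws-demo -- echo remote-hello   # -> remote-hello
# a websocket client dials the same subresource directly, e.g.:
#   /api/v1/namespaces/default/pods/ws-demo/exec?command=echo&command=remote-hello&stdout=true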
• [SLOW TEST:8.784 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":277,"completed":109,"skipped":1557,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:05:06.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0212 21:05:51.134076 9 metrics_grabber.go:80] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 12 21:05:51.134: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:05:51.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3475" for this suite. 
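Same garbage-collector mechanics as the Deployment case earlier, one level down: delete a ReplicationController with orphan semantics and its pods outlive it. A sketch (orphan-rc and nginx are illustrative):

$ cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: orphan-rc
spec:
  replicas: 2
  selector:
    app: orphan-rc
  template:
    metadata:
      labels:
        app: orphan-rc
    spec:
      containers:
      - name: nginx
        image: nginx
EOF
$ kubectl delete rc orphan-rc --cascade=false
$ kubectl get pods -l app=orphan-rc     # both pods survive their controller
$ kubectl get pods -l app=orphan-rc -o jsonpath='{.items[0].metadata.ownerReferences}'   # emptied by the GC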
• [SLOW TEST:44.407 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":277,"completed":110,"skipped":1563,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:05:51.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:692 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-5844 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5844 STEP: creating replication controller externalsvc in namespace services-5844 I0212 21:05:51.662822 9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5844, replica count: 2 I0212 21:05:54.713617 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 21:05:57.714203 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 21:06:00.714655 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 21:06:03.715596 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Feb 12 21:06:04.580: INFO: Creating new exec pod Feb 12 21:06:18.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5844 execpod7d47f -- /bin/sh -x -c nslookup nodeport-service' Feb 12 21:06:19.151: INFO: stderr: "I0212 21:06:18.905207 2301 log.go:172] (0xc0008b80b0) (0xc0007a8140) Create stream\nI0212 21:06:18.905484 2301 log.go:172] (0xc0008b80b0) (0xc0007a8140) Stream added, broadcasting: 1\nI0212 21:06:18.910538 2301 log.go:172] (0xc0008b80b0) Reply frame received for 1\nI0212 21:06:18.910646 2301 log.go:172] (0xc0008b80b0) (0xc0007ee000) Create stream\nI0212 21:06:18.910671 2301 log.go:172] (0xc0008b80b0) (0xc0007ee000) Stream added, broadcasting: 3\nI0212 21:06:18.913397 2301 log.go:172] (0xc0008b80b0) Reply frame received for 3\nI0212 21:06:18.913442 2301 log.go:172] (0xc0008b80b0) (0xc0007a81e0) Create 
stream\nI0212 21:06:18.913490 2301 log.go:172] (0xc0008b80b0) (0xc0007a81e0) Stream added, broadcasting: 5\nI0212 21:06:18.915096 2301 log.go:172] (0xc0008b80b0) Reply frame received for 5\nI0212 21:06:19.024451 2301 log.go:172] (0xc0008b80b0) Data frame received for 5\nI0212 21:06:19.024519 2301 log.go:172] (0xc0007a81e0) (5) Data frame handling\nI0212 21:06:19.024568 2301 log.go:172] (0xc0007a81e0) (5) Data frame sent\n+ nslookup nodeport-service\nI0212 21:06:19.047809 2301 log.go:172] (0xc0008b80b0) Data frame received for 3\nI0212 21:06:19.047863 2301 log.go:172] (0xc0007ee000) (3) Data frame handling\nI0212 21:06:19.047887 2301 log.go:172] (0xc0007ee000) (3) Data frame sent\nI0212 21:06:19.056552 2301 log.go:172] (0xc0008b80b0) Data frame received for 3\nI0212 21:06:19.056666 2301 log.go:172] (0xc0007ee000) (3) Data frame handling\nI0212 21:06:19.056741 2301 log.go:172] (0xc0007ee000) (3) Data frame sent\nI0212 21:06:19.132079 2301 log.go:172] (0xc0008b80b0) Data frame received for 1\nI0212 21:06:19.132292 2301 log.go:172] (0xc0008b80b0) (0xc0007ee000) Stream removed, broadcasting: 3\nI0212 21:06:19.132500 2301 log.go:172] (0xc0007a8140) (1) Data frame handling\nI0212 21:06:19.132544 2301 log.go:172] (0xc0007a8140) (1) Data frame sent\nI0212 21:06:19.132561 2301 log.go:172] (0xc0008b80b0) (0xc0007a8140) Stream removed, broadcasting: 1\nI0212 21:06:19.133779 2301 log.go:172] (0xc0008b80b0) (0xc0007a81e0) Stream removed, broadcasting: 5\nI0212 21:06:19.133892 2301 log.go:172] (0xc0008b80b0) (0xc0007a8140) Stream removed, broadcasting: 1\nI0212 21:06:19.133914 2301 log.go:172] (0xc0008b80b0) (0xc0007ee000) Stream removed, broadcasting: 3\nI0212 21:06:19.133941 2301 log.go:172] (0xc0008b80b0) (0xc0007a81e0) Stream removed, broadcasting: 5\nI0212 21:06:19.134280 2301 log.go:172] (0xc0008b80b0) Go away received\n" Feb 12 21:06:19.151: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-5844.svc.cluster.local\tcanonical name = externalsvc.services-5844.svc.cluster.local.\nName:\texternalsvc.services-5844.svc.cluster.local\nAddress: 10.96.162.49\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5844, will wait for the garbage collector to delete the pods Feb 12 21:06:19.213: INFO: Deleting ReplicationController externalsvc took: 7.090537ms Feb 12 21:06:19.514: INFO: Terminating ReplicationController externalsvc pods took: 300.363687ms Feb 12 21:06:33.375: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:06:33.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5844" for this suite. 
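Note: an ExternalName service, as created by the "changing the NodePort service to type=ExternalName" step, is served by cluster DNS as a CNAME; the nslookup output above shows nodeport-service resolving through the externalsvc FQDN. A minimal equivalent manifest, with illustrative names:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: nodeport-service
      namespace: my-ns                 # illustrative namespace
    spec:
      type: ExternalName
      externalName: externalsvc.my-ns.svc.cluster.local
    EOF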
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:696 • [SLOW TEST:42.314 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":277,"completed":111,"skipped":1570,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:06:33.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token STEP: reading a file in the container Feb 12 21:06:42.217: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2088 pod-service-account-464483d3-f7e9-4efe-b7fd-0fd61a601cfc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Feb 12 21:06:42.555: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2088 pod-service-account-464483d3-f7e9-4efe-b7fd-0fd61a601cfc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Feb 12 21:06:42.957: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2088 pod-service-account-464483d3-f7e9-4efe-b7fd-0fd61a601cfc -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:06:43.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2088" for this suite. 
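Note: the three files read above are the standard projection of the service account credential into any pod that mounts the token; the same check can be run by hand (pod name illustrative):

    kubectl exec my-pod -c test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token
    kubectl exec my-pod -c test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    kubectl exec my-pod -c test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace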
• [SLOW TEST:9.850 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":277,"completed":112,"skipped":1585,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:06:43.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:229 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1582 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 12 21:06:43.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3185' Feb 12 21:06:43.533: INFO: stderr: "" Feb 12 21:06:43.533: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1587 Feb 12 21:06:43.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3185' Feb 12 21:06:52.432: INFO: stderr: "" Feb 12 21:06:52.432: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:06:52.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3185" for this suite. 
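Note: with --restart=Never this kubectl vintage creates a bare Pod rather than a managed workload, which is exactly what the verification step asserts. The command from the log, with an illustrative namespace:

    kubectl run e2e-test-httpd-pod --restart=Never \
      --image=docker.io/library/httpd:2.4.38-alpine --namespace=my-ns
    kubectl get pod e2e-test-httpd-pod --namespace=my-ns   # a Pod, not a Deployment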
• [SLOW TEST:9.257 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1578 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":277,"completed":113,"skipped":1588,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:06:52.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 12 21:07:01.859: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:07:01.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8385" for this suite. 
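Note: FallbackToLogsOnError tells the kubelet to fall back to the tail of the container log as the termination message when a container fails without writing /dev/termination-log, which is why the expected message DONE above was recovered from log output. A minimal sketch of such a pod (name, image and command are illustrative):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-message-demo   # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["/bin/sh", "-c", "echo DONE; exit 1"]   # fail after logging DONE
        terminationMessagePolicy: FallbackToLogsOnError
    EOF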
• [SLOW TEST:9.377 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":277,"completed":114,"skipped":1665,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:07:01.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Feb 12 21:07:02.137: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6665ddba-69aa-4ca8-b91e-f512cc46a2d8" in namespace "downward-api-3888" to be "Succeeded or Failed" Feb 12 21:07:02.161: INFO: Pod "downwardapi-volume-6665ddba-69aa-4ca8-b91e-f512cc46a2d8": Phase="Pending", Reason="", readiness=false. Elapsed: 24.299514ms Feb 12 21:07:04.165: INFO: Pod "downwardapi-volume-6665ddba-69aa-4ca8-b91e-f512cc46a2d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028179909s Feb 12 21:07:06.174: INFO: Pod "downwardapi-volume-6665ddba-69aa-4ca8-b91e-f512cc46a2d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037231217s Feb 12 21:07:08.183: INFO: Pod "downwardapi-volume-6665ddba-69aa-4ca8-b91e-f512cc46a2d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045911789s Feb 12 21:07:10.201: INFO: Pod "downwardapi-volume-6665ddba-69aa-4ca8-b91e-f512cc46a2d8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.064250801s STEP: Saw pod success Feb 12 21:07:10.201: INFO: Pod "downwardapi-volume-6665ddba-69aa-4ca8-b91e-f512cc46a2d8" satisfied condition "Succeeded or Failed" Feb 12 21:07:10.205: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6665ddba-69aa-4ca8-b91e-f512cc46a2d8 container client-container: STEP: delete the pod Feb 12 21:07:10.290: INFO: Waiting for pod downwardapi-volume-6665ddba-69aa-4ca8-b91e-f512cc46a2d8 to disappear Feb 12 21:07:10.340: INFO: Pod downwardapi-volume-6665ddba-69aa-4ca8-b91e-f512cc46a2d8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:07:10.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3888" for this suite. • [SLOW TEST:8.406 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":277,"completed":115,"skipped":1668,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:07:10.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:07:10.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7877" for this suite. 
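Note: the patch step above is an ordinary strategic-merge patch; adding a label and then listing by that label is enough to verify it. An illustrative equivalent (secret name and label invented for the example):

    kubectl patch secret my-secret -p '{"metadata":{"labels":{"testsecret":"patched"}}}'
    kubectl get secrets --all-namespaces -l testsecret=patched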
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":277,"completed":116,"skipped":1712,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:07:10.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 12 21:07:11.570: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 12 21:07:13.590: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138431, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138431, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138431, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138431, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 21:07:15.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138431, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138431, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138431, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138431, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 21:07:17.598: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138431, 
loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138431, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138431, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138431, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 12 21:07:20.630: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Feb 12 21:07:26.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-5426 to-be-attached-pod -i -c=container1' Feb 12 21:07:26.892: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:07:26.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5426" for this suite. STEP: Destroying namespace "webhook-5426-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.351 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":277,"completed":117,"skipped":1715,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:07:27.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:229 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Feb 12 21:07:27.134: INFO: namespace kubectl-8458 Feb 12 21:07:27.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8458' Feb 12 21:07:27.667: INFO: stderr: "" Feb 12 21:07:27.667: INFO: 
stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Feb 12 21:07:28.759: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 21:07:28.759: INFO: Found 0 / 1 Feb 12 21:07:29.673: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 21:07:29.673: INFO: Found 0 / 1 Feb 12 21:07:30.683: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 21:07:30.683: INFO: Found 0 / 1 Feb 12 21:07:32.272: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 21:07:32.272: INFO: Found 0 / 1 Feb 12 21:07:32.770: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 21:07:32.771: INFO: Found 0 / 1 Feb 12 21:07:33.680: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 21:07:33.680: INFO: Found 0 / 1 Feb 12 21:07:34.676: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 21:07:34.676: INFO: Found 0 / 1 Feb 12 21:07:35.674: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 21:07:35.674: INFO: Found 0 / 1 Feb 12 21:07:37.013: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 21:07:37.014: INFO: Found 1 / 1 Feb 12 21:07:37.014: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 12 21:07:37.020: INFO: Selector matched 1 pods for map[app:agnhost] Feb 12 21:07:37.020: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 12 21:07:37.020: INFO: wait on agnhost-master startup in kubectl-8458 Feb 12 21:07:37.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-85ql5 agnhost-master --namespace=kubectl-8458' Feb 12 21:07:37.218: INFO: stderr: "" Feb 12 21:07:37.218: INFO: stdout: "Paused\n" STEP: exposing RC Feb 12 21:07:37.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8458' Feb 12 21:07:37.421: INFO: stderr: "" Feb 12 21:07:37.421: INFO: stdout: "service/rm2 exposed\n" Feb 12 21:07:37.550: INFO: Service rm2 in namespace kubectl-8458 found. STEP: exposing service Feb 12 21:07:39.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8458' Feb 12 21:07:39.816: INFO: stderr: "" Feb 12 21:07:39.816: INFO: stdout: "service/rm3 exposed\n" Feb 12 21:07:39.820: INFO: Service rm3 in namespace kubectl-8458 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:07:41.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8458" for this suite. 
• [SLOW TEST:14.805 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1247 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":277,"completed":118,"skipped":1718,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:07:41.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token Feb 12 21:07:42.489: INFO: created pod pod-service-account-defaultsa Feb 12 21:07:42.489: INFO: pod pod-service-account-defaultsa service account token volume mount: true Feb 12 21:07:42.531: INFO: created pod pod-service-account-mountsa Feb 12 21:07:42.531: INFO: pod pod-service-account-mountsa service account token volume mount: true Feb 12 21:07:42.556: INFO: created pod pod-service-account-nomountsa Feb 12 21:07:42.556: INFO: pod pod-service-account-nomountsa service account token volume mount: false Feb 12 21:07:42.584: INFO: created pod pod-service-account-defaultsa-mountspec Feb 12 21:07:42.584: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Feb 12 21:07:42.603: INFO: created pod pod-service-account-mountsa-mountspec Feb 12 21:07:42.603: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Feb 12 21:07:42.728: INFO: created pod pod-service-account-nomountsa-mountspec Feb 12 21:07:42.728: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Feb 12 21:07:42.811: INFO: created pod pod-service-account-defaultsa-nomountspec Feb 12 21:07:42.811: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Feb 12 21:07:42.906: INFO: created pod pod-service-account-mountsa-nomountspec Feb 12 21:07:42.906: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Feb 12 21:07:42.968: INFO: created pod pod-service-account-nomountsa-nomountspec Feb 12 21:07:42.968: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:07:42.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5651" for this suite. 
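Note: the nine pods above cover the combinations of automountServiceAccountToken set on the ServiceAccount versus on the pod spec; where both are set, the pod-level field wins. A minimal opt-out sketch (names illustrative):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-token-pod                      # illustrative
    spec:
      serviceAccountName: default
      automountServiceAccountToken: false     # pod-level setting overrides the ServiceAccount's
      containers:
      - name: main
        image: busybox
        command: ["sleep", "3600"]
    EOF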
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":277,"completed":119,"skipped":1722,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:07:44.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:229 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Feb 12 21:07:46.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Feb 12 21:07:46.833: INFO: stderr: "" Feb 12 21:07:46.833: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18+\", GitVersion:\"v1.18.0-alpha.4.4+6541758fd4d9fc\", GitCommit:\"6541758fd4d9fc375839a484a7e03c189b05ce3d\", GitTreeState:\"clean\", BuildDate:\"2020-02-12T20:03:09Z\", GoVersion:\"go1.13.7\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:07:46.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7608" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":277,"completed":120,"skipped":1761,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:07:46.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0212 21:08:01.230330 9 metrics_grabber.go:80] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 12 21:08:01.230: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:08:01.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9719" for this suite. • [SLOW TEST:19.136 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":277,"completed":121,"skipped":1878,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:08:05.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Feb 12 21:08:09.132: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-84e186fa-b73b-48bd-b29c-1b125b31cc4f" in namespace "security-context-test-8619" to be "Succeeded or Failed" Feb 12 21:08:09.344: INFO: Pod "busybox-readonly-false-84e186fa-b73b-48bd-b29c-1b125b31cc4f": Phase="Pending", Reason="", readiness=false. Elapsed: 211.867289ms Feb 12 21:08:11.482: INFO: Pod "busybox-readonly-false-84e186fa-b73b-48bd-b29c-1b125b31cc4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.350102788s Feb 12 21:08:13.498: INFO: Pod "busybox-readonly-false-84e186fa-b73b-48bd-b29c-1b125b31cc4f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.366070632s Feb 12 21:08:15.514: INFO: Pod "busybox-readonly-false-84e186fa-b73b-48bd-b29c-1b125b31cc4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.382114341s Feb 12 21:08:17.579: INFO: Pod "busybox-readonly-false-84e186fa-b73b-48bd-b29c-1b125b31cc4f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.446790444s Feb 12 21:08:19.587: INFO: Pod "busybox-readonly-false-84e186fa-b73b-48bd-b29c-1b125b31cc4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.45469421s Feb 12 21:08:19.587: INFO: Pod "busybox-readonly-false-84e186fa-b73b-48bd-b29c-1b125b31cc4f" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:08:19.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8619" for this suite. • [SLOW TEST:13.602 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":277,"completed":122,"skipped":1896,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:08:19.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Feb 12 21:08:19.759: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Feb 12 21:08:22.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-525 create -f -' Feb 12 21:08:26.666: INFO: stderr: "" Feb 12 21:08:26.666: INFO: stdout: "e2e-test-crd-publish-openapi-9838-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Feb 12 21:08:26.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-525 delete e2e-test-crd-publish-openapi-9838-crds test-cr' Feb 12 21:08:26.784: INFO: stderr: "" Feb 12 21:08:26.784: INFO: stdout: "e2e-test-crd-publish-openapi-9838-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Feb 12 21:08:26.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-525 apply -f -' Feb 12 21:08:27.256: INFO: stderr: "" 
Feb 12 21:08:27.256: INFO: stdout: "e2e-test-crd-publish-openapi-9838-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Feb 12 21:08:27.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-525 delete e2e-test-crd-publish-openapi-9838-crds test-cr' Feb 12 21:08:27.370: INFO: stderr: "" Feb 12 21:08:27.370: INFO: stdout: "e2e-test-crd-publish-openapi-9838-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Feb 12 21:08:27.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9838-crds' Feb 12 21:08:27.679: INFO: stderr: "" Feb 12 21:08:27.679: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9838-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:08:30.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-525" for this suite. • [SLOW TEST:11.121 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":277,"completed":123,"skipped":1896,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:08:30.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Feb 12 21:08:30.904: INFO: Waiting up to 5m0s for pod "var-expansion-bf347a13-956f-4c1c-8f15-003aba3a7b07" in namespace "var-expansion-5869" to be "Succeeded or Failed" Feb 12 21:08:30.914: INFO: Pod "var-expansion-bf347a13-956f-4c1c-8f15-003aba3a7b07": Phase="Pending", Reason="", readiness=false. Elapsed: 10.003112ms Feb 12 21:08:32.920: INFO: Pod "var-expansion-bf347a13-956f-4c1c-8f15-003aba3a7b07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015394332s Feb 12 21:08:34.925: INFO: Pod "var-expansion-bf347a13-956f-4c1c-8f15-003aba3a7b07": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020759029s Feb 12 21:08:36.931: INFO: Pod "var-expansion-bf347a13-956f-4c1c-8f15-003aba3a7b07": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.027139159s STEP: Saw pod success Feb 12 21:08:36.931: INFO: Pod "var-expansion-bf347a13-956f-4c1c-8f15-003aba3a7b07" satisfied condition "Succeeded or Failed" Feb 12 21:08:36.934: INFO: Trying to get logs from node jerma-node pod var-expansion-bf347a13-956f-4c1c-8f15-003aba3a7b07 container dapi-container: STEP: delete the pod Feb 12 21:08:36.973: INFO: Waiting for pod var-expansion-bf347a13-956f-4c1c-8f15-003aba3a7b07 to disappear Feb 12 21:08:36.983: INFO: Pod var-expansion-bf347a13-956f-4c1c-8f15-003aba3a7b07 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:08:36.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5869" for this suite. • [SLOW TEST:6.345 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":277,"completed":124,"skipped":1946,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:08:37.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-604519cb-3337-41e0-bd1c-195cb28fe30e STEP: Creating a pod to test consume configMaps Feb 12 21:08:37.214: INFO: Waiting up to 5m0s for pod "pod-configmaps-229f6600-3d5a-4fa0-a12f-a31483daa1ee" in namespace "configmap-3435" to be "Succeeded or Failed" Feb 12 21:08:37.289: INFO: Pod "pod-configmaps-229f6600-3d5a-4fa0-a12f-a31483daa1ee": Phase="Pending", Reason="", readiness=false. Elapsed: 75.598161ms Feb 12 21:08:39.296: INFO: Pod "pod-configmaps-229f6600-3d5a-4fa0-a12f-a31483daa1ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081954813s Feb 12 21:08:41.302: INFO: Pod "pod-configmaps-229f6600-3d5a-4fa0-a12f-a31483daa1ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087890103s Feb 12 21:08:43.341: INFO: Pod "pod-configmaps-229f6600-3d5a-4fa0-a12f-a31483daa1ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.126892336s Feb 12 21:08:45.350: INFO: Pod "pod-configmaps-229f6600-3d5a-4fa0-a12f-a31483daa1ee": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.135975378s STEP: Saw pod success Feb 12 21:08:45.350: INFO: Pod "pod-configmaps-229f6600-3d5a-4fa0-a12f-a31483daa1ee" satisfied condition "Succeeded or Failed" Feb 12 21:08:45.353: INFO: Trying to get logs from node jerma-node pod pod-configmaps-229f6600-3d5a-4fa0-a12f-a31483daa1ee container configmap-volume-test: STEP: delete the pod Feb 12 21:08:45.394: INFO: Waiting for pod pod-configmaps-229f6600-3d5a-4fa0-a12f-a31483daa1ee to disappear Feb 12 21:08:45.430: INFO: Pod pod-configmaps-229f6600-3d5a-4fa0-a12f-a31483daa1ee no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:08:45.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3435" for this suite. • [SLOW TEST:8.372 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":277,"completed":125,"skipped":1954,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:08:45.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:08:51.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1840" for this suite. 
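Note: hostAliases entries are rendered by the kubelet into the pod's /etc/hosts, which is what the spec above asserts. A sketch with invented IP and hostnames:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostaliases-demo        # illustrative
    spec:
      restartPolicy: Never
      hostAliases:
      - ip: "127.0.0.1"
        hostnames: ["foo.local", "bar.local"]
      containers:
      - name: main
        image: busybox
        command: ["cat", "/etc/hosts"]
    EOF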
• [SLOW TEST:6.177 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":126,"skipped":1976,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:08:51.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Feb 12 21:08:51.715: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:08:55.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2391" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":277,"completed":127,"skipped":1995,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:08:55.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3834.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3834.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3834.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3834.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3834.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3834.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 12 21:09:07.791: INFO: DNS probes using dns-3834/dns-test-c817506e-ba00-4446-83bf-7f84bd5300d7 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:09:10.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3834" for this suite. • [SLOW TEST:15.887 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":277,"completed":128,"skipped":2019,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:09:11.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:09:19.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6544" for this suite. 
• [SLOW TEST:8.352 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":129,"skipped":2021,"failed":0} SSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:09:19.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Feb 12 21:09:19.911: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-1e43368a-5d07-43f2-9cce-160a40f7426c" in namespace "security-context-test-9475" to be "Succeeded or Failed" Feb 12 21:09:19.933: INFO: Pod "busybox-privileged-false-1e43368a-5d07-43f2-9cce-160a40f7426c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.386652ms Feb 12 21:09:21.937: INFO: Pod "busybox-privileged-false-1e43368a-5d07-43f2-9cce-160a40f7426c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026400503s Feb 12 21:09:23.943: INFO: Pod "busybox-privileged-false-1e43368a-5d07-43f2-9cce-160a40f7426c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031969868s Feb 12 21:09:25.953: INFO: Pod "busybox-privileged-false-1e43368a-5d07-43f2-9cce-160a40f7426c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042549958s Feb 12 21:09:25.953: INFO: Pod "busybox-privileged-false-1e43368a-5d07-43f2-9cce-160a40f7426c" satisfied condition "Succeeded or Failed" Feb 12 21:09:25.973: INFO: Got logs for pod "busybox-privileged-false-1e43368a-5d07-43f2-9cce-160a40f7426c": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:09:25.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9475" for this suite. 
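Note: the "ip: RTNETLINK answers: Operation not permitted" output captured above is the expected outcome; with privileged: false the container lacks the capabilities needed to manipulate network interfaces. A sketch along the same lines (pod name and exact command are illustrative):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: unprivileged-demo       # illustrative
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["/bin/sh", "-c", "ip link add dummy0 type dummy || true"]
        securityContext:
          privileged: false          # RTNETLINK operations should be denied
    EOF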
• [SLOW TEST:6.214 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":130,"skipped":2024,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:09:25.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:692 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service endpoint-test2 in namespace services-2389 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2389 to expose endpoints map[] Feb 12 21:09:26.369: INFO: Get endpoints failed (45.767431ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Feb 12 21:09:27.375: INFO: successfully validated that service endpoint-test2 in namespace services-2389 exposes endpoints map[] (1.051989861s elapsed) STEP: Creating pod pod1 in namespace services-2389 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2389 to expose endpoints map[pod1:[80]] Feb 12 21:09:31.499: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.114280999s elapsed, will retry) Feb 12 21:09:34.567: INFO: successfully validated that service endpoint-test2 in namespace services-2389 exposes endpoints map[pod1:[80]] (7.182060578s elapsed) STEP: Creating pod pod2 in namespace services-2389 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2389 to expose endpoints map[pod1:[80] pod2:[80]] Feb 12 21:09:39.261: INFO: Unexpected endpoints: found map[2278bfda-da4f-41af-b443-d2b4c06c3cbe:[80]], expected map[pod1:[80] pod2:[80]] (4.686351099s elapsed, will retry) Feb 12 21:09:44.089: INFO: successfully validated that service endpoint-test2 in namespace services-2389 exposes endpoints map[pod1:[80] pod2:[80]] (9.514360628s elapsed) STEP: Deleting pod pod1 in namespace services-2389 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2389 to expose endpoints map[pod2:[80]] Feb 12 21:09:44.179: INFO: successfully validated that service endpoint-test2 in namespace services-2389 exposes endpoints map[pod2:[80]] (82.74604ms elapsed) STEP: Deleting pod pod2 in namespace services-2389 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2389 to expose endpoints map[] Feb 12 
21:09:44.208: INFO: successfully validated that service endpoint-test2 in namespace services-2389 exposes endpoints map[] (18.62791ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:09:44.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2389" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:696 • [SLOW TEST:18.353 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":277,"completed":131,"skipped":2076,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:09:44.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88 Feb 12 21:09:44.413: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 12 21:09:44.478: INFO: Waiting for terminating namespaces to be deleted... 
Feb 12 21:09:44.481: INFO: Logging pods the kubelet thinks is on node jerma-node before test Feb 12 21:09:44.490: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Feb 12 21:09:44.490: INFO: Container kube-proxy ready: true, restart count 0 Feb 12 21:09:44.490: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Feb 12 21:09:44.490: INFO: Container weave ready: true, restart count 1 Feb 12 21:09:44.490: INFO: Container weave-npc ready: true, restart count 0 Feb 12 21:09:44.490: INFO: busybox-readonly-fs521d15d3-3b70-4949-be0b-b76629ff487a from kubelet-test-6544 started at 2020-02-12 21:09:11 +0000 UTC (1 container statuses recorded) Feb 12 21:09:44.490: INFO: Container busybox-readonly-fs521d15d3-3b70-4949-be0b-b76629ff487a ready: true, restart count 0 Feb 12 21:09:44.490: INFO: pod1 from services-2389 started at 2020-02-12 21:09:27 +0000 UTC (1 container statuses recorded) Feb 12 21:09:44.490: INFO: Container pause ready: true, restart count 0 Feb 12 21:09:44.490: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Feb 12 21:09:44.509: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 12 21:09:44.509: INFO: Container kube-apiserver ready: true, restart count 1 Feb 12 21:09:44.509: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 12 21:09:44.509: INFO: Container etcd ready: true, restart count 1 Feb 12 21:09:44.509: INFO: pod2 from services-2389 started at 2020-02-12 21:09:34 +0000 UTC (1 container statuses recorded) Feb 12 21:09:44.509: INFO: Container pause ready: true, restart count 0 Feb 12 21:09:44.509: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 12 21:09:44.509: INFO: Container coredns ready: true, restart count 0 Feb 12 21:09:44.509: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Feb 12 21:09:44.509: INFO: Container coredns ready: true, restart count 0 Feb 12 21:09:44.509: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Feb 12 21:09:44.509: INFO: Container kube-controller-manager ready: true, restart count 6 Feb 12 21:09:44.509: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Feb 12 21:09:44.509: INFO: Container kube-proxy ready: true, restart count 0 Feb 12 21:09:44.509: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Feb 12 21:09:44.509: INFO: Container weave ready: true, restart count 0 Feb 12 21:09:44.509: INFO: Container weave-npc ready: true, restart count 0 Feb 12 21:09:44.509: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Feb 12 21:09:44.509: INFO: Container kube-scheduler ready: true, restart count 9 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. 
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-c133b435-9ab9-4d02-bb39-02bd433156cc 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-c133b435-9ab9-4d02-bb39-02bd433156cc off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-c133b435-9ab9-4d02-bb39-02bd433156cc [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Feb 12 21:10:04.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5846" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79 • [SLOW TEST:20.450 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":277,"completed":132,"skipped":2088,"failed":0} SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Feb 12 21:10:04.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Feb 12 21:10:04.858: INFO: (0) /api/v1/nodes/jerma-node:10250/proxy/logs/:
alternatives.log alternatives.l... (200; 8.915947ms)
Feb 12 21:10:04.863: INFO: (1) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.127267ms)
Feb 12 21:10:04.867: INFO: (2) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.451093ms)
Feb 12 21:10:04.872: INFO: (3) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.245847ms)
Feb 12 21:10:04.876: INFO: (4) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.995339ms)
Feb 12 21:10:04.880: INFO: (5) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.110535ms)
Feb 12 21:10:04.885: INFO: (6) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.522072ms)
Feb 12 21:10:04.919: INFO: (7) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 33.123018ms)
Feb 12 21:10:04.924: INFO: (8) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.332443ms)
Feb 12 21:10:04.930: INFO: (9) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.021142ms)
Feb 12 21:10:04.934: INFO: (10) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.204633ms)
Feb 12 21:10:04.941: INFO: (11) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.14268ms)
Feb 12 21:10:04.947: INFO: (12) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.324328ms)
Feb 12 21:10:04.952: INFO: (13) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.052179ms)
Feb 12 21:10:04.959: INFO: (14) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.692716ms)
Feb 12 21:10:04.962: INFO: (15) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.655415ms)
Feb 12 21:10:04.965: INFO: (16) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.079089ms)
Feb 12 21:10:04.969: INFO: (17) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.193465ms)
Feb 12 21:10:04.972: INFO: (18) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.669334ms)
Feb 12 21:10:04.976: INFO: (19) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.323185ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:10:04.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4792" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":277,"completed":133,"skipped":2091,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:10:04.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 12 21:10:05.075: INFO: Waiting up to 5m0s for pod "pod-e6993c8c-3e70-4bb4-9fcf-8cd9de2db869" in namespace "emptydir-1934" to be "Succeeded or Failed"
Feb 12 21:10:05.080: INFO: Pod "pod-e6993c8c-3e70-4bb4-9fcf-8cd9de2db869": Phase="Pending", Reason="", readiness=false. Elapsed: 5.40856ms
Feb 12 21:10:07.090: INFO: Pod "pod-e6993c8c-3e70-4bb4-9fcf-8cd9de2db869": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015294954s
Feb 12 21:10:09.096: INFO: Pod "pod-e6993c8c-3e70-4bb4-9fcf-8cd9de2db869": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021102222s
Feb 12 21:10:11.100: INFO: Pod "pod-e6993c8c-3e70-4bb4-9fcf-8cd9de2db869": Phase="Pending", Reason="", readiness=false. Elapsed: 6.024924975s
Feb 12 21:10:13.110: INFO: Pod "pod-e6993c8c-3e70-4bb4-9fcf-8cd9de2db869": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.035685088s
STEP: Saw pod success
Feb 12 21:10:13.110: INFO: Pod "pod-e6993c8c-3e70-4bb4-9fcf-8cd9de2db869" satisfied condition "Succeeded or Failed"
Feb 12 21:10:13.113: INFO: Trying to get logs from node jerma-node pod pod-e6993c8c-3e70-4bb4-9fcf-8cd9de2db869 container test-container: 
STEP: delete the pod
Feb 12 21:10:13.199: INFO: Waiting for pod pod-e6993c8c-3e70-4bb4-9fcf-8cd9de2db869 to disappear
Feb 12 21:10:13.203: INFO: Pod pod-e6993c8c-3e70-4bb4-9fcf-8cd9de2db869 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:10:13.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1934" for this suite.

• [SLOW TEST:8.228 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":134,"skipped":2127,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:10:13.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 21:10:13.329: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 12 21:10:18.347: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 12 21:10:22.372: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 12 21:10:24.377: INFO: Creating deployment "test-rollover-deployment"
Feb 12 21:10:24.398: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 12 21:10:26.408: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 12 21:10:26.418: INFO: Ensure that both replica sets have 1 created replica
Feb 12 21:10:26.426: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 12 21:10:26.435: INFO: Updating deployment test-rollover-deployment
Feb 12 21:10:26.435: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 12 21:10:30.456: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 12 21:10:30.470: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 12 21:10:30.477: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 21:10:30.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138629, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:10:32.504: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 21:10:32.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138629, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:10:34.488: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 21:10:34.488: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138629, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:10:36.488: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 21:10:36.489: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138629, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:10:38.491: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 21:10:38.491: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138636, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:10:40.491: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 21:10:40.491: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138636, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:10:42.489: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 21:10:42.489: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138636, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:10:44.489: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 21:10:44.489: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138636, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:10:46.491: INFO: all replica sets need to contain the pod-template-hash label
Feb 12 21:10:46.492: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138636, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138624, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:10:48.490: INFO: 
Feb 12 21:10:48.490: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 12 21:10:48.507: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-2845 /apis/apps/v1/namespaces/deployment-2845/deployments/test-rollover-deployment 7ef5f3be-5beb-4379-a388-a95edae68476 8020603 2 2020-02-12 21:10:24 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0043b6878  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-12 21:10:24 +0000 UTC,LastTransitionTime:2020-02-12 21:10:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-02-12 21:10:46 +0000 UTC,LastTransitionTime:2020-02-12 21:10:24 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 12 21:10:48.510: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-2845 /apis/apps/v1/namespaces/deployment-2845/replicasets/test-rollover-deployment-574d6dfbff c8227a34-495f-48b3-b2b6-09545db9c45f 8020593 2 2020-02-12 21:10:26 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 7ef5f3be-5beb-4379-a388-a95edae68476 0xc0043b6cd7 0xc0043b6cd8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0043b6d48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 12 21:10:48.510: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 12 21:10:48.511: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-2845 /apis/apps/v1/namespaces/deployment-2845/replicasets/test-rollover-controller 6f22613a-3e1c-49c0-8516-9c4fe163f78f 8020602 2 2020-02-12 21:10:13 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 7ef5f3be-5beb-4379-a388-a95edae68476 0xc0043b6bef 0xc0043b6c00}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0043b6c68  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 12 21:10:48.511: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-2845 /apis/apps/v1/namespaces/deployment-2845/replicasets/test-rollover-deployment-f6c94f66c ccd469f1-998f-46ea-b54d-6434e6161e2c 8020546 2 2020-02-12 21:10:24 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 7ef5f3be-5beb-4379-a388-a95edae68476 0xc0043b6db0 0xc0043b6db1}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0043b6e28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 12 21:10:48.519: INFO: Pod "test-rollover-deployment-574d6dfbff-nmtfs" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-nmtfs test-rollover-deployment-574d6dfbff- deployment-2845 /api/v1/namespaces/deployment-2845/pods/test-rollover-deployment-574d6dfbff-nmtfs 0ec3c2ca-d116-4c16-8906-5d87cdead936 8020567 0 2020-02-12 21:10:29 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff c8227a34-495f-48b3-b2b6-09545db9c45f 0xc0043b7347 0xc0043b7348}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rdhcl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rdhcl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rdhcl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 21:10:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 21:10:36 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 21:10:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 21:10:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-12 21:10:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-12 21:10:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://f8fe3729a3614b9e198976ab4104eaf40f233eab57147956ddef44514313b7f5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:10:48.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2845" for this suite.

• [SLOW TEST:35.323 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":277,"completed":135,"skipped":2181,"failed":0}
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:10:48.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-850e8b54-f365-49fd-943c-6d0cda09f48d
STEP: Creating a pod to test consume secrets
Feb 12 21:10:48.953: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-85232e20-b3b6-4677-b35d-1f92e2fe59b3" in namespace "projected-8694" to be "Succeeded or Failed"
Feb 12 21:10:48.964: INFO: Pod "pod-projected-secrets-85232e20-b3b6-4677-b35d-1f92e2fe59b3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.615165ms
Feb 12 21:10:50.970: INFO: Pod "pod-projected-secrets-85232e20-b3b6-4677-b35d-1f92e2fe59b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017375519s
Feb 12 21:10:52.976: INFO: Pod "pod-projected-secrets-85232e20-b3b6-4677-b35d-1f92e2fe59b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023216961s
Feb 12 21:10:54.989: INFO: Pod "pod-projected-secrets-85232e20-b3b6-4677-b35d-1f92e2fe59b3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036295354s
Feb 12 21:10:57.372: INFO: Pod "pod-projected-secrets-85232e20-b3b6-4677-b35d-1f92e2fe59b3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.418890563s
Feb 12 21:11:00.708: INFO: Pod "pod-projected-secrets-85232e20-b3b6-4677-b35d-1f92e2fe59b3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.754779938s
Feb 12 21:11:02.718: INFO: Pod "pod-projected-secrets-85232e20-b3b6-4677-b35d-1f92e2fe59b3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.765514043s
Feb 12 21:11:04.725: INFO: Pod "pod-projected-secrets-85232e20-b3b6-4677-b35d-1f92e2fe59b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.77177225s
STEP: Saw pod success
Feb 12 21:11:04.725: INFO: Pod "pod-projected-secrets-85232e20-b3b6-4677-b35d-1f92e2fe59b3" satisfied condition "Succeeded or Failed"
Feb 12 21:11:04.728: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-85232e20-b3b6-4677-b35d-1f92e2fe59b3 container projected-secret-volume-test: 
STEP: delete the pod
Feb 12 21:11:04.761: INFO: Waiting for pod pod-projected-secrets-85232e20-b3b6-4677-b35d-1f92e2fe59b3 to disappear
Feb 12 21:11:04.765: INFO: Pod pod-projected-secrets-85232e20-b3b6-4677-b35d-1f92e2fe59b3 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:11:04.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8694" for this suite.

• [SLOW TEST:16.242 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":136,"skipped":2181,"failed":0}
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:11:04.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 21:11:04.903: INFO: The status of Pod test-webserver-add47047-6241-45a0-8e36-03dce13d7b91 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 21:11:06.909: INFO: The status of Pod test-webserver-add47047-6241-45a0-8e36-03dce13d7b91 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 21:11:08.909: INFO: The status of Pod test-webserver-add47047-6241-45a0-8e36-03dce13d7b91 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 21:11:10.910: INFO: The status of Pod test-webserver-add47047-6241-45a0-8e36-03dce13d7b91 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 21:11:12.910: INFO: The status of Pod test-webserver-add47047-6241-45a0-8e36-03dce13d7b91 is Running (Ready = false)
Feb 12 21:11:14.910: INFO: The status of Pod test-webserver-add47047-6241-45a0-8e36-03dce13d7b91 is Running (Ready = false)
Feb 12 21:11:16.913: INFO: The status of Pod test-webserver-add47047-6241-45a0-8e36-03dce13d7b91 is Running (Ready = false)
Feb 12 21:11:18.913: INFO: The status of Pod test-webserver-add47047-6241-45a0-8e36-03dce13d7b91 is Running (Ready = false)
Feb 12 21:11:20.913: INFO: The status of Pod test-webserver-add47047-6241-45a0-8e36-03dce13d7b91 is Running (Ready = false)
Feb 12 21:11:22.912: INFO: The status of Pod test-webserver-add47047-6241-45a0-8e36-03dce13d7b91 is Running (Ready = false)
Feb 12 21:11:24.912: INFO: The status of Pod test-webserver-add47047-6241-45a0-8e36-03dce13d7b91 is Running (Ready = false)
Feb 12 21:11:26.910: INFO: The status of Pod test-webserver-add47047-6241-45a0-8e36-03dce13d7b91 is Running (Ready = true)
Feb 12 21:11:26.914: INFO: Container started at 2020-02-12 21:11:10 +0000 UTC, pod became ready at 2020-02-12 21:11:26 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:11:26.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-151" for this suite.

• [SLOW TEST:22.160 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":277,"completed":137,"skipped":2181,"failed":0}
SSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:11:26.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Feb 12 21:11:27.126: INFO: Waiting up to 5m0s for pod "client-containers-10224b1e-e372-484b-a28c-b26e5b4cf7dc" in namespace "containers-6115" to be "Succeeded or Failed"
Feb 12 21:11:27.147: INFO: Pod "client-containers-10224b1e-e372-484b-a28c-b26e5b4cf7dc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.979193ms
Feb 12 21:11:29.154: INFO: Pod "client-containers-10224b1e-e372-484b-a28c-b26e5b4cf7dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028360282s
Feb 12 21:11:31.164: INFO: Pod "client-containers-10224b1e-e372-484b-a28c-b26e5b4cf7dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038213094s
Feb 12 21:11:33.173: INFO: Pod "client-containers-10224b1e-e372-484b-a28c-b26e5b4cf7dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046702959s
Feb 12 21:11:35.178: INFO: Pod "client-containers-10224b1e-e372-484b-a28c-b26e5b4cf7dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051737424s
STEP: Saw pod success
Feb 12 21:11:35.178: INFO: Pod "client-containers-10224b1e-e372-484b-a28c-b26e5b4cf7dc" satisfied condition "Succeeded or Failed"
Feb 12 21:11:35.180: INFO: Trying to get logs from node jerma-node pod client-containers-10224b1e-e372-484b-a28c-b26e5b4cf7dc container test-container: 
STEP: delete the pod
Feb 12 21:11:35.210: INFO: Waiting for pod client-containers-10224b1e-e372-484b-a28c-b26e5b4cf7dc to disappear
Feb 12 21:11:35.234: INFO: Pod client-containers-10224b1e-e372-484b-a28c-b26e5b4cf7dc no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:11:35.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6115" for this suite.

• [SLOW TEST:8.307 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":277,"completed":138,"skipped":2185,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:11:35.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-c1aa0899-70fd-4300-bdc2-6ffd905e583a
STEP: Creating a pod to test consume secrets
Feb 12 21:11:35.655: INFO: Waiting up to 5m0s for pod "pod-secrets-b9a08211-75d0-4292-8d26-9ea0288ddbab" in namespace "secrets-6868" to be "Succeeded or Failed"
Feb 12 21:11:35.676: INFO: Pod "pod-secrets-b9a08211-75d0-4292-8d26-9ea0288ddbab": Phase="Pending", Reason="", readiness=false. Elapsed: 20.31396ms
Feb 12 21:11:37.682: INFO: Pod "pod-secrets-b9a08211-75d0-4292-8d26-9ea0288ddbab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027110888s
Feb 12 21:11:39.691: INFO: Pod "pod-secrets-b9a08211-75d0-4292-8d26-9ea0288ddbab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036137584s
Feb 12 21:11:41.697: INFO: Pod "pod-secrets-b9a08211-75d0-4292-8d26-9ea0288ddbab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042018182s
Feb 12 21:11:43.706: INFO: Pod "pod-secrets-b9a08211-75d0-4292-8d26-9ea0288ddbab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.050537709s
STEP: Saw pod success
Feb 12 21:11:43.706: INFO: Pod "pod-secrets-b9a08211-75d0-4292-8d26-9ea0288ddbab" satisfied condition "Succeeded or Failed"
Feb 12 21:11:43.713: INFO: Trying to get logs from node jerma-node pod pod-secrets-b9a08211-75d0-4292-8d26-9ea0288ddbab container secret-volume-test: 
STEP: delete the pod
Feb 12 21:11:43.754: INFO: Waiting for pod pod-secrets-b9a08211-75d0-4292-8d26-9ea0288ddbab to disappear
Feb 12 21:11:43.761: INFO: Pod pod-secrets-b9a08211-75d0-4292-8d26-9ea0288ddbab no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:11:43.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6868" for this suite.
STEP: Destroying namespace "secret-namespace-1547" for this suite.

• [SLOW TEST:8.542 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":277,"completed":139,"skipped":2197,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:11:43.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 12 21:11:44.689: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 12 21:11:46.705: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138704, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138704, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138705, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138704, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:11:48.710: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138704, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138704, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138705, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138704, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:11:50.711: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138704, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138704, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138705, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138704, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 12 21:11:53.778: INFO: Waiting for the number of endpoints of service e2e-test-webhook to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that the API server cannot talk to, with fail-closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:11:53.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8992" for this suite.
STEP: Destroying namespace "webhook-8992-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.311 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":277,"completed":140,"skipped":2239,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:11:54.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Feb 12 21:11:54.180: INFO: Waiting up to 5m0s for pod "downward-api-299d0e62-8ac9-4e86-b6ff-a7896a79d5be" in namespace "downward-api-893" to be "Succeeded or Failed"
Feb 12 21:11:54.188: INFO: Pod "downward-api-299d0e62-8ac9-4e86-b6ff-a7896a79d5be": Phase="Pending", Reason="", readiness=false. Elapsed: 7.279487ms
Feb 12 21:11:56.194: INFO: Pod "downward-api-299d0e62-8ac9-4e86-b6ff-a7896a79d5be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014136964s
Feb 12 21:11:58.200: INFO: Pod "downward-api-299d0e62-8ac9-4e86-b6ff-a7896a79d5be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020197349s
Feb 12 21:12:00.209: INFO: Pod "downward-api-299d0e62-8ac9-4e86-b6ff-a7896a79d5be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028239739s
Feb 12 21:12:02.213: INFO: Pod "downward-api-299d0e62-8ac9-4e86-b6ff-a7896a79d5be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.032296048s
Feb 12 21:12:04.219: INFO: Pod "downward-api-299d0e62-8ac9-4e86-b6ff-a7896a79d5be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.038684265s
STEP: Saw pod success
Feb 12 21:12:04.219: INFO: Pod "downward-api-299d0e62-8ac9-4e86-b6ff-a7896a79d5be" satisfied condition "Succeeded or Failed"
Feb 12 21:12:04.226: INFO: Trying to get logs from node jerma-node pod downward-api-299d0e62-8ac9-4e86-b6ff-a7896a79d5be container dapi-container: 
STEP: delete the pod
Feb 12 21:12:04.283: INFO: Waiting for pod downward-api-299d0e62-8ac9-4e86-b6ff-a7896a79d5be to disappear
Feb 12 21:12:04.287: INFO: Pod downward-api-299d0e62-8ac9-4e86-b6ff-a7896a79d5be no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:12:04.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-893" for this suite.

• [SLOW TEST:10.252 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":277,"completed":141,"skipped":2257,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:12:04.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7496 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7496;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7496 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7496;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7496.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7496.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7496.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7496.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7496.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7496.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7496.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7496.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7496.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7496.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7496.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7496.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7496.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 253.178.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.178.253_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 253.178.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.178.253_tcp@PTR;
  sleep 1;
done

STEP: Running these commands on jessie: for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7496 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7496;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7496 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7496;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7496.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7496.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7496.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7496.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7496.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7496.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7496.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7496.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7496.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7496.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7496.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7496.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7496.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 253.178.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.178.253_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 253.178.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.178.253_tcp@PTR;
  sleep 1;
done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 12 21:12:14.582: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:14.590: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:14.600: INFO: Unable to read wheezy_udp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:14.607: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:14.610: INFO: Unable to read wheezy_udp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:14.617: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:14.628: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:14.632: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:14.669: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:14.679: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:14.682: INFO: Unable to read jessie_udp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:14.687: INFO: Unable to read jessie_tcp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:14.692: INFO: Unable to read jessie_udp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:14.696: INFO: Unable to read jessie_tcp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:14.701: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:14.705: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:14.739: INFO: Lookups using dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7496 wheezy_tcp@dns-test-service.dns-7496 wheezy_udp@dns-test-service.dns-7496.svc wheezy_tcp@dns-test-service.dns-7496.svc wheezy_udp@_http._tcp.dns-test-service.dns-7496.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7496.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7496 jessie_tcp@dns-test-service.dns-7496 jessie_udp@dns-test-service.dns-7496.svc jessie_tcp@dns-test-service.dns-7496.svc jessie_udp@_http._tcp.dns-test-service.dns-7496.svc jessie_tcp@_http._tcp.dns-test-service.dns-7496.svc]

Feb 12 21:12:19.752: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:19.759: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:19.766: INFO: Unable to read wheezy_udp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:19.777: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:19.786: INFO: Unable to read wheezy_udp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:19.791: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:19.799: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:19.805: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:19.957: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:19.968: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:19.984: INFO: Unable to read jessie_udp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:19.996: INFO: Unable to read jessie_tcp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:20.003: INFO: Unable to read jessie_udp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:20.011: INFO: Unable to read jessie_tcp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:20.020: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:20.027: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:20.066: INFO: Lookups using dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7496 wheezy_tcp@dns-test-service.dns-7496 wheezy_udp@dns-test-service.dns-7496.svc wheezy_tcp@dns-test-service.dns-7496.svc wheezy_udp@_http._tcp.dns-test-service.dns-7496.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7496.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7496 jessie_tcp@dns-test-service.dns-7496 jessie_udp@dns-test-service.dns-7496.svc jessie_tcp@dns-test-service.dns-7496.svc jessie_udp@_http._tcp.dns-test-service.dns-7496.svc jessie_tcp@_http._tcp.dns-test-service.dns-7496.svc]

Feb 12 21:12:24.749: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:24.756: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:24.761: INFO: Unable to read wheezy_udp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:24.767: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:24.772: INFO: Unable to read wheezy_udp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:24.778: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:24.783: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:24.788: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:24.826: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:24.830: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:24.836: INFO: Unable to read jessie_udp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:24.842: INFO: Unable to read jessie_tcp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:24.848: INFO: Unable to read jessie_udp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:24.852: INFO: Unable to read jessie_tcp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:24.856: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:24.861: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:24.895: INFO: Lookups using dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7496 wheezy_tcp@dns-test-service.dns-7496 wheezy_udp@dns-test-service.dns-7496.svc wheezy_tcp@dns-test-service.dns-7496.svc wheezy_udp@_http._tcp.dns-test-service.dns-7496.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7496.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7496 jessie_tcp@dns-test-service.dns-7496 jessie_udp@dns-test-service.dns-7496.svc jessie_tcp@dns-test-service.dns-7496.svc jessie_udp@_http._tcp.dns-test-service.dns-7496.svc jessie_tcp@_http._tcp.dns-test-service.dns-7496.svc]

Feb 12 21:12:29.748: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:29.754: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:29.759: INFO: Unable to read wheezy_udp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:29.763: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:29.769: INFO: Unable to read wheezy_udp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:29.774: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:29.779: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:29.786: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:29.864: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:29.873: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:29.884: INFO: Unable to read jessie_udp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:29.890: INFO: Unable to read jessie_tcp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:29.900: INFO: Unable to read jessie_udp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:29.908: INFO: Unable to read jessie_tcp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:29.913: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:29.918: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:29.940: INFO: Lookups using dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7496 wheezy_tcp@dns-test-service.dns-7496 wheezy_udp@dns-test-service.dns-7496.svc wheezy_tcp@dns-test-service.dns-7496.svc wheezy_udp@_http._tcp.dns-test-service.dns-7496.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7496.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7496 jessie_tcp@dns-test-service.dns-7496 jessie_udp@dns-test-service.dns-7496.svc jessie_tcp@dns-test-service.dns-7496.svc jessie_udp@_http._tcp.dns-test-service.dns-7496.svc jessie_tcp@_http._tcp.dns-test-service.dns-7496.svc]

Feb 12 21:12:34.746: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:34.750: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:34.755: INFO: Unable to read wheezy_udp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:34.758: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:34.761: INFO: Unable to read wheezy_udp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:34.764: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:34.767: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:34.774: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:34.808: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:34.810: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:34.813: INFO: Unable to read jessie_udp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:34.816: INFO: Unable to read jessie_tcp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:34.818: INFO: Unable to read jessie_udp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:34.822: INFO: Unable to read jessie_tcp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:34.824: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:34.828: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:34.864: INFO: Lookups using dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7496 wheezy_tcp@dns-test-service.dns-7496 wheezy_udp@dns-test-service.dns-7496.svc wheezy_tcp@dns-test-service.dns-7496.svc wheezy_udp@_http._tcp.dns-test-service.dns-7496.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7496.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7496 jessie_tcp@dns-test-service.dns-7496 jessie_udp@dns-test-service.dns-7496.svc jessie_tcp@dns-test-service.dns-7496.svc jessie_udp@_http._tcp.dns-test-service.dns-7496.svc jessie_tcp@_http._tcp.dns-test-service.dns-7496.svc]

Feb 12 21:12:39.749: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:39.757: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:39.762: INFO: Unable to read wheezy_udp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:39.769: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:39.776: INFO: Unable to read wheezy_udp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:39.787: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:39.793: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:39.798: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:39.841: INFO: Unable to read jessie_udp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:39.848: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:39.856: INFO: Unable to read jessie_udp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:39.868: INFO: Unable to read jessie_tcp@dns-test-service.dns-7496 from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:39.875: INFO: Unable to read jessie_udp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:39.884: INFO: Unable to read jessie_tcp@dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:39.892: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:39.898: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7496.svc from pod dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747: the server could not find the requested resource (get pods dns-test-004ef031-6026-4f2c-83d0-3f4b31367747)
Feb 12 21:12:39.933: INFO: Lookups using dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-7496 wheezy_tcp@dns-test-service.dns-7496 wheezy_udp@dns-test-service.dns-7496.svc wheezy_tcp@dns-test-service.dns-7496.svc wheezy_udp@_http._tcp.dns-test-service.dns-7496.svc wheezy_tcp@_http._tcp.dns-test-service.dns-7496.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-7496 jessie_tcp@dns-test-service.dns-7496 jessie_udp@dns-test-service.dns-7496.svc jessie_tcp@dns-test-service.dns-7496.svc jessie_udp@_http._tcp.dns-test-service.dns-7496.svc jessie_tcp@_http._tcp.dns-test-service.dns-7496.svc]

Feb 12 21:12:45.082: INFO: DNS probes using dns-7496/dns-test-004ef031-6026-4f2c-83d0-3f4b31367747 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:12:45.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7496" for this suite.

• [SLOW TEST:41.019 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":277,"completed":142,"skipped":2289,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:12:45.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:12:52.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2229" for this suite.

• [SLOW TEST:7.089 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":277,"completed":143,"skipped":2311,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:12:52.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do
  test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-832.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-832.svc.cluster.local;
  test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-832.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  sleep 1;
done

STEP: Running these commands on jessie: for i in `seq 1 600`; do
  test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-832.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-832.svc.cluster.local;
  test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-832.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
  sleep 1;
done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 12 21:13:02.688: INFO: DNS probes using dns-832/dns-test-d8465c63-2483-4123-b645-657e94318219 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:13:02.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-832" for this suite.

• [SLOW TEST:10.279 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":277,"completed":144,"skipped":2316,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:13:02.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Feb 12 21:13:02.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Feb 12 21:13:13.841: INFO: >>> kubeConfig: /root/.kube/config
Feb 12 21:13:15.718: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:13:27.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-275" for this suite.

• [SLOW TEST:25.099 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":277,"completed":145,"skipped":2317,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:13:27.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-9965
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9965
STEP: Deleting pre-stop pod
Feb 12 21:13:47.031: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:13:47.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9965" for this suite.

• [SLOW TEST:19.254 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":277,"completed":146,"skipped":2349,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:13:47.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 12 21:13:48.053: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 12 21:13:50.075: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138828, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138828, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138828, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138827, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:13:52.080: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138828, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138828, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138828, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138827, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:13:54.082: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138828, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138828, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138828, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138827, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:13:56.084: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138828, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138828, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138828, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138827, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 12 21:13:59.111: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
Feb 12 21:13:59.166: INFO: Waiting for webhook configuration to be ready...
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:13:59.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1209" for this suite.
STEP: Destroying namespace "webhook-1209-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.606 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":277,"completed":147,"skipped":2379,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:13:59.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Feb 12 21:14:00.071: INFO: Waiting up to 5m0s for pod "downward-api-4874f892-5d3a-4c78-966c-6fecce41c38e" in namespace "downward-api-3326" to be "Succeeded or Failed"
Feb 12 21:14:00.144: INFO: Pod "downward-api-4874f892-5d3a-4c78-966c-6fecce41c38e": Phase="Pending", Reason="", readiness=false. Elapsed: 72.038933ms
Feb 12 21:14:02.150: INFO: Pod "downward-api-4874f892-5d3a-4c78-966c-6fecce41c38e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078089412s
Feb 12 21:14:04.154: INFO: Pod "downward-api-4874f892-5d3a-4c78-966c-6fecce41c38e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082069184s
Feb 12 21:14:06.162: INFO: Pod "downward-api-4874f892-5d3a-4c78-966c-6fecce41c38e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090646347s
Feb 12 21:14:08.171: INFO: Pod "downward-api-4874f892-5d3a-4c78-966c-6fecce41c38e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.09945868s
STEP: Saw pod success
Feb 12 21:14:08.171: INFO: Pod "downward-api-4874f892-5d3a-4c78-966c-6fecce41c38e" satisfied condition "Succeeded or Failed"
Feb 12 21:14:08.175: INFO: Trying to get logs from node jerma-node pod downward-api-4874f892-5d3a-4c78-966c-6fecce41c38e container dapi-container: 
STEP: delete the pod
Feb 12 21:14:08.884: INFO: Waiting for pod downward-api-4874f892-5d3a-4c78-966c-6fecce41c38e to disappear
Feb 12 21:14:08.894: INFO: Pod downward-api-4874f892-5d3a-4c78-966c-6fecce41c38e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:14:08.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3326" for this suite.

• [SLOW TEST:9.225 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":277,"completed":148,"skipped":2393,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:14:08.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 12 21:14:09.569: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 12 21:14:11.596: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138849, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138849, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138849, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138849, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:14:13.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138849, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138849, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138849, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138849, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:14:15.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138849, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138849, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138849, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717138849, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 12 21:14:18.679: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:14:28.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8875" for this suite.
STEP: Destroying namespace "webhook-8875-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:20.169 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":277,"completed":149,"skipped":2403,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:14:29.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-gxx8b in namespace proxy-113
I0212 21:14:29.297822       9 runners.go:189] Created replication controller with name: proxy-service-gxx8b, namespace: proxy-113, replica count: 1
I0212 21:14:30.348789       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 21:14:31.349186       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 21:14:32.349767       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 21:14:33.350120       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 21:14:34.350503       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 21:14:35.350902       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 21:14:36.351307       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 21:14:37.351672       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0212 21:14:38.352050       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0212 21:14:39.352477       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0212 21:14:40.353083       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0212 21:14:41.353604       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0212 21:14:42.353945       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0212 21:14:43.354258       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0212 21:14:44.354672       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0212 21:14:45.355240       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0212 21:14:46.355682       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0212 21:14:47.356086       9 runners.go:189] proxy-service-gxx8b Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 12 21:14:47.360: INFO: setup took 18.116348821s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb 12 21:14:47.385: INFO: (0) /api/v1/namespaces/proxy-113/pods/http:proxy-service-gxx8b-95s7j:160/proxy/: foo (200; 24.763921ms)
Feb 12 21:14:47.387: INFO: (0) /api/v1/namespaces/proxy-113/services/http:proxy-service-gxx8b:portname1/proxy/: foo (200; 26.379782ms)
Feb 12 21:14:47.388: INFO: (0) /api/v1/namespaces/proxy-113/services/proxy-service-gxx8b:portname2/proxy/: bar (200; 27.822434ms)
Feb 12 21:14:47.388: INFO: (0) /api/v1/namespaces/proxy-113/pods/proxy-service-gxx8b-95s7j:162/proxy/: bar (200; 27.626274ms)
Feb 12 21:14:47.391: INFO: (0) /api/v1/namespaces/proxy-113/pods/proxy-service-gxx8b-95s7j/proxy/: test (200; 30.214781ms)
[320 proxy attempts condensed: iterations (0) through (19) each hit the same 16 endpoints (pod port 160 "foo", 162 "bar", 1080 "test", 460 "tls baz", 462 "tls qux", and 443, directly and via the http:/https: scheme prefixes, plus the service endpoints portname1, portname2, tlsportname1, and tlsportname2); every attempt returned 200, with latencies between roughly 3ms and 34ms. The HTML response bodies were stripped when this log was extracted, truncating and fusing several adjacent entries.]
STEP: deleting ReplicationController proxy-service-gxx8b in namespace proxy-113, will wait for the garbage collector to delete the pods
Feb 12 21:14:47.765: INFO: Deleting ReplicationController proxy-service-gxx8b took: 6.503793ms
Feb 12 21:14:48.065: INFO: Terminating ReplicationController proxy-service-gxx8b pods took: 300.380595ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:14:52.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-113" for this suite.

• [SLOW TEST:23.714 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":277,"completed":150,"skipped":2435,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:14:52.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Feb 12 21:14:52.956: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c54a9bc-338b-4283-a1b6-67ffbf3cbeec" in namespace "downward-api-1879" to be "Succeeded or Failed"
Feb 12 21:14:52.965: INFO: Pod "downwardapi-volume-4c54a9bc-338b-4283-a1b6-67ffbf3cbeec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.6343ms
Feb 12 21:14:54.973: INFO: Pod "downwardapi-volume-4c54a9bc-338b-4283-a1b6-67ffbf3cbeec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015941755s
Feb 12 21:14:56.981: INFO: Pod "downwardapi-volume-4c54a9bc-338b-4283-a1b6-67ffbf3cbeec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024633402s
Feb 12 21:14:58.988: INFO: Pod "downwardapi-volume-4c54a9bc-338b-4283-a1b6-67ffbf3cbeec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0314708s
Feb 12 21:15:00.994: INFO: Pod "downwardapi-volume-4c54a9bc-338b-4283-a1b6-67ffbf3cbeec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037108257s
STEP: Saw pod success
Feb 12 21:15:00.994: INFO: Pod "downwardapi-volume-4c54a9bc-338b-4283-a1b6-67ffbf3cbeec" satisfied condition "Succeeded or Failed"
Feb 12 21:15:00.997: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-4c54a9bc-338b-4283-a1b6-67ffbf3cbeec container client-container: 
STEP: delete the pod
Feb 12 21:15:01.536: INFO: Waiting for pod downwardapi-volume-4c54a9bc-338b-4283-a1b6-67ffbf3cbeec to disappear
Feb 12 21:15:01.559: INFO: Pod downwardapi-volume-4c54a9bc-338b-4283-a1b6-67ffbf3cbeec no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:15:01.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1879" for this suite.

• [SLOW TEST:8.773 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":277,"completed":151,"skipped":2441,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:15:01.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4614.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4614.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4614.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4614.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 12 21:15:11.935: INFO: DNS probes using dns-test-eb302cbb-90ac-4661-901d-8d0befd57466 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4614.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4614.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4614.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4614.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 12 21:15:24.214: INFO: File wheezy_udp@dns-test-service-3.dns-4614.svc.cluster.local from pod  dns-4614/dns-test-5481390d-d358-410d-85e5-204d7e419594 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 12 21:15:24.218: INFO: File jessie_udp@dns-test-service-3.dns-4614.svc.cluster.local from pod  dns-4614/dns-test-5481390d-d358-410d-85e5-204d7e419594 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 12 21:15:24.218: INFO: Lookups using dns-4614/dns-test-5481390d-d358-410d-85e5-204d7e419594 failed for: [wheezy_udp@dns-test-service-3.dns-4614.svc.cluster.local jessie_udp@dns-test-service-3.dns-4614.svc.cluster.local]

Feb 12 21:15:29.229: INFO: File wheezy_udp@dns-test-service-3.dns-4614.svc.cluster.local from pod  dns-4614/dns-test-5481390d-d358-410d-85e5-204d7e419594 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 12 21:15:29.234: INFO: File jessie_udp@dns-test-service-3.dns-4614.svc.cluster.local from pod  dns-4614/dns-test-5481390d-d358-410d-85e5-204d7e419594 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 12 21:15:29.234: INFO: Lookups using dns-4614/dns-test-5481390d-d358-410d-85e5-204d7e419594 failed for: [wheezy_udp@dns-test-service-3.dns-4614.svc.cluster.local jessie_udp@dns-test-service-3.dns-4614.svc.cluster.local]

Feb 12 21:15:34.225: INFO: File wheezy_udp@dns-test-service-3.dns-4614.svc.cluster.local from pod  dns-4614/dns-test-5481390d-d358-410d-85e5-204d7e419594 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 12 21:15:34.229: INFO: File jessie_udp@dns-test-service-3.dns-4614.svc.cluster.local from pod  dns-4614/dns-test-5481390d-d358-410d-85e5-204d7e419594 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 12 21:15:34.229: INFO: Lookups using dns-4614/dns-test-5481390d-d358-410d-85e5-204d7e419594 failed for: [wheezy_udp@dns-test-service-3.dns-4614.svc.cluster.local jessie_udp@dns-test-service-3.dns-4614.svc.cluster.local]

Feb 12 21:15:39.225: INFO: File wheezy_udp@dns-test-service-3.dns-4614.svc.cluster.local from pod  dns-4614/dns-test-5481390d-d358-410d-85e5-204d7e419594 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 12 21:15:39.229: INFO: File jessie_udp@dns-test-service-3.dns-4614.svc.cluster.local from pod  dns-4614/dns-test-5481390d-d358-410d-85e5-204d7e419594 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 12 21:15:39.229: INFO: Lookups using dns-4614/dns-test-5481390d-d358-410d-85e5-204d7e419594 failed for: [wheezy_udp@dns-test-service-3.dns-4614.svc.cluster.local jessie_udp@dns-test-service-3.dns-4614.svc.cluster.local]

Feb 12 21:15:44.231: INFO: DNS probes using dns-test-5481390d-d358-410d-85e5-204d7e419594 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4614.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4614.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4614.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4614.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 12 21:16:00.569: INFO: DNS probes using dns-test-602488f0-4b56-40e7-8995-973632c0a506 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:16:00.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4614" for this suite.

• [SLOW TEST:59.146 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":277,"completed":152,"skipped":2442,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:16:00.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Feb 12 21:16:00.849: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fbe177db-095c-4ced-933d-3d3a6c54462f" in namespace "projected-9302" to be "Succeeded or Failed"
Feb 12 21:16:00.868: INFO: Pod "downwardapi-volume-fbe177db-095c-4ced-933d-3d3a6c54462f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.700132ms
Feb 12 21:16:02.877: INFO: Pod "downwardapi-volume-fbe177db-095c-4ced-933d-3d3a6c54462f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027299843s
Feb 12 21:16:04.888: INFO: Pod "downwardapi-volume-fbe177db-095c-4ced-933d-3d3a6c54462f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03808797s
Feb 12 21:16:06.901: INFO: Pod "downwardapi-volume-fbe177db-095c-4ced-933d-3d3a6c54462f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051295158s
Feb 12 21:16:09.001: INFO: Pod "downwardapi-volume-fbe177db-095c-4ced-933d-3d3a6c54462f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.151079768s
Feb 12 21:16:11.005: INFO: Pod "downwardapi-volume-fbe177db-095c-4ced-933d-3d3a6c54462f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.155779852s
STEP: Saw pod success
Feb 12 21:16:11.005: INFO: Pod "downwardapi-volume-fbe177db-095c-4ced-933d-3d3a6c54462f" satisfied condition "Succeeded or Failed"
Feb 12 21:16:11.009: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-fbe177db-095c-4ced-933d-3d3a6c54462f container client-container: 
STEP: delete the pod
Feb 12 21:16:11.069: INFO: Waiting for pod downwardapi-volume-fbe177db-095c-4ced-933d-3d3a6c54462f to disappear
Feb 12 21:16:11.073: INFO: Pod downwardapi-volume-fbe177db-095c-4ced-933d-3d3a6c54462f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:16:11.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9302" for this suite.

• [SLOW TEST:10.346 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":153,"skipped":2451,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:16:11.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Feb 12 21:16:11.202: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:16:22.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5680" for this suite.

• [SLOW TEST:11.239 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":277,"completed":154,"skipped":2467,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:16:22.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-c2c9b8de-6b4e-4b0d-a6a1-4642f68213cc
STEP: Creating secret with name s-test-opt-upd-66474468-4b5e-468d-8355-3e7c0524294e
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-c2c9b8de-6b4e-4b0d-a6a1-4642f68213cc
STEP: Updating secret s-test-opt-upd-66474468-4b5e-468d-8355-3e7c0524294e
STEP: Creating secret with name s-test-opt-create-8de30bdb-9170-42fb-a4c9-3275761ae272
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:16:34.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6474" for this suite.

• [SLOW TEST:12.663 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":277,"completed":155,"skipped":2552,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:16:34.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-a6d8db52-398a-481e-897f-6c1cb87c1fd3 in namespace container-probe-2167
Feb 12 21:16:45.114: INFO: Started pod busybox-a6d8db52-398a-481e-897f-6c1cb87c1fd3 in namespace container-probe-2167
STEP: checking the pod's current state and verifying that restartCount is present
Feb 12 21:16:45.117: INFO: Initial restart count of pod busybox-a6d8db52-398a-481e-897f-6c1cb87c1fd3 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:20:46.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2167" for this suite.

• [SLOW TEST:251.766 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":277,"completed":156,"skipped":2639,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:20:46.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Feb 12 21:20:46.823: INFO: Waiting up to 5m0s for pod "downwardapi-volume-875b7b4a-0ff5-40a8-86bc-2d47e1c0070b" in namespace "projected-8976" to be "Succeeded or Failed"
Feb 12 21:20:46.826: INFO: Pod "downwardapi-volume-875b7b4a-0ff5-40a8-86bc-2d47e1c0070b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.034488ms
Feb 12 21:20:48.833: INFO: Pod "downwardapi-volume-875b7b4a-0ff5-40a8-86bc-2d47e1c0070b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009665182s
Feb 12 21:20:50.837: INFO: Pod "downwardapi-volume-875b7b4a-0ff5-40a8-86bc-2d47e1c0070b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013755159s
Feb 12 21:20:52.843: INFO: Pod "downwardapi-volume-875b7b4a-0ff5-40a8-86bc-2d47e1c0070b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019683957s
Feb 12 21:20:54.855: INFO: Pod "downwardapi-volume-875b7b4a-0ff5-40a8-86bc-2d47e1c0070b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.031546122s
STEP: Saw pod success
Feb 12 21:20:54.855: INFO: Pod "downwardapi-volume-875b7b4a-0ff5-40a8-86bc-2d47e1c0070b" satisfied condition "Succeeded or Failed"
Feb 12 21:20:54.865: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-875b7b4a-0ff5-40a8-86bc-2d47e1c0070b container client-container: 
STEP: delete the pod
Feb 12 21:20:55.066: INFO: Waiting for pod downwardapi-volume-875b7b4a-0ff5-40a8-86bc-2d47e1c0070b to disappear
Feb 12 21:20:55.081: INFO: Pod downwardapi-volume-875b7b4a-0ff5-40a8-86bc-2d47e1c0070b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:20:55.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8976" for this suite.

• [SLOW TEST:8.336 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":277,"completed":157,"skipped":2661,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:20:55.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-dcb62c2c-28b9-49e8-a296-ff92758caac8
STEP: Creating a pod to test consume secrets
Feb 12 21:20:55.242: INFO: Waiting up to 5m0s for pod "pod-secrets-b1a7dd72-0c0a-42e1-bd29-a78a9256741f" in namespace "secrets-825" to be "Succeeded or Failed"
Feb 12 21:20:55.247: INFO: Pod "pod-secrets-b1a7dd72-0c0a-42e1-bd29-a78a9256741f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.269005ms
Feb 12 21:20:57.253: INFO: Pod "pod-secrets-b1a7dd72-0c0a-42e1-bd29-a78a9256741f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01020004s
Feb 12 21:20:59.263: INFO: Pod "pod-secrets-b1a7dd72-0c0a-42e1-bd29-a78a9256741f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020333083s
Feb 12 21:21:01.268: INFO: Pod "pod-secrets-b1a7dd72-0c0a-42e1-bd29-a78a9256741f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.02597127s
Feb 12 21:21:03.275: INFO: Pod "pod-secrets-b1a7dd72-0c0a-42e1-bd29-a78a9256741f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.032876947s
STEP: Saw pod success
Feb 12 21:21:03.275: INFO: Pod "pod-secrets-b1a7dd72-0c0a-42e1-bd29-a78a9256741f" satisfied condition "Succeeded or Failed"
Feb 12 21:21:03.279: INFO: Trying to get logs from node jerma-node pod pod-secrets-b1a7dd72-0c0a-42e1-bd29-a78a9256741f container secret-volume-test: 
STEP: delete the pod
Feb 12 21:21:03.335: INFO: Waiting for pod pod-secrets-b1a7dd72-0c0a-42e1-bd29-a78a9256741f to disappear
Feb 12 21:21:03.363: INFO: Pod pod-secrets-b1a7dd72-0c0a-42e1-bd29-a78a9256741f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:21:03.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-825" for this suite.

• [SLOW TEST:8.319 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":158,"skipped":2665,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:21:03.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:692
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7817
STEP: Creating an active service to test reachability when its FQDN is referred to as externalName for another service
STEP: creating service externalsvc in namespace services-7817
STEP: creating replication controller externalsvc in namespace services-7817
I0212 21:21:03.944898       9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7817, replica count: 2
I0212 21:21:06.995756       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 21:21:09.996274       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 21:21:12.996652       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Feb 12 21:21:13.062: INFO: Creating new exec pod
Feb 12 21:21:19.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7817 execpodwzvkl -- /bin/sh -x -c nslookup clusterip-service'
Feb 12 21:21:21.630: INFO: stderr: "I0212 21:21:21.425798    2632 log.go:172] (0xc0000f5760) (0xc000697f40) Create stream\nI0212 21:21:21.425931    2632 log.go:172] (0xc0000f5760) (0xc000697f40) Stream added, broadcasting: 1\nI0212 21:21:21.431251    2632 log.go:172] (0xc0000f5760) Reply frame received for 1\nI0212 21:21:21.431336    2632 log.go:172] (0xc0000f5760) (0xc0005c6820) Create stream\nI0212 21:21:21.431349    2632 log.go:172] (0xc0000f5760) (0xc0005c6820) Stream added, broadcasting: 3\nI0212 21:21:21.432634    2632 log.go:172] (0xc0000f5760) Reply frame received for 3\nI0212 21:21:21.432682    2632 log.go:172] (0xc0000f5760) (0xc000613680) Create stream\nI0212 21:21:21.432699    2632 log.go:172] (0xc0000f5760) (0xc000613680) Stream added, broadcasting: 5\nI0212 21:21:21.434180    2632 log.go:172] (0xc0000f5760) Reply frame received for 5\nI0212 21:21:21.521847    2632 log.go:172] (0xc0000f5760) Data frame received for 5\nI0212 21:21:21.521887    2632 log.go:172] (0xc000613680) (5) Data frame handling\nI0212 21:21:21.521910    2632 log.go:172] (0xc000613680) (5) Data frame sent\nI0212 21:21:21.521918    2632 log.go:172] (0xc0000f5760) Data frame received for 5\nI0212 21:21:21.521926    2632 log.go:172] (0xc000613680) (5) Data frame handling\n+ nslookup clusterip-service\nI0212 21:21:21.521944    2632 log.go:172] (0xc000613680) (5) Data frame sent\nI0212 21:21:21.538519    2632 log.go:172] (0xc0000f5760) Data frame received for 3\nI0212 21:21:21.538698    2632 log.go:172] (0xc0005c6820) (3) Data frame handling\nI0212 21:21:21.538758    2632 log.go:172] (0xc0005c6820) (3) Data frame sent\nI0212 21:21:21.540862    2632 log.go:172] (0xc0000f5760) Data frame received for 3\nI0212 21:21:21.540877    2632 log.go:172] (0xc0005c6820) (3) Data frame handling\nI0212 21:21:21.540890    2632 log.go:172] (0xc0005c6820) (3) Data frame sent\nI0212 21:21:21.617167    2632 log.go:172] (0xc0000f5760) Data frame received for 1\nI0212 21:21:21.617255    2632 log.go:172] (0xc0000f5760) (0xc0005c6820) Stream removed, broadcasting: 3\nI0212 21:21:21.617331    2632 log.go:172] (0xc000697f40) (1) Data frame handling\nI0212 21:21:21.617370    2632 log.go:172] (0xc000697f40) (1) Data frame sent\nI0212 21:21:21.617386    2632 log.go:172] (0xc0000f5760) (0xc000697f40) Stream removed, broadcasting: 1\nI0212 21:21:21.618413    2632 log.go:172] (0xc0000f5760) (0xc000613680) Stream removed, broadcasting: 5\nI0212 21:21:21.618457    2632 log.go:172] (0xc0000f5760) Go away received\nI0212 21:21:21.618908    2632 log.go:172] (0xc0000f5760) (0xc000697f40) Stream removed, broadcasting: 1\nI0212 21:21:21.619072    2632 log.go:172] (0xc0000f5760) (0xc0005c6820) Stream removed, broadcasting: 3\nI0212 21:21:21.619100    2632 log.go:172] (0xc0000f5760) (0xc000613680) Stream removed, broadcasting: 5\n"
Feb 12 21:21:21.630: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-7817.svc.cluster.local\tcanonical name = externalsvc.services-7817.svc.cluster.local.\nName:\texternalsvc.services-7817.svc.cluster.local\nAddress: 10.96.27.247\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-7817, will wait for the garbage collector to delete the pods
Feb 12 21:21:21.691: INFO: Deleting ReplicationController externalsvc took: 4.970271ms
Feb 12 21:21:21.992: INFO: Terminating ReplicationController externalsvc pods took: 300.340175ms
Feb 12 21:21:32.535: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:21:32.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7817" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:696

• [SLOW TEST:29.171 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":277,"completed":159,"skipped":2700,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:21:32.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:21:40.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1353" for this suite.

• [SLOW TEST:8.413 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":277,"completed":160,"skipped":2709,"failed":0}
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:21:40.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Feb 12 21:21:41.124: INFO: Created pod &Pod{ObjectMeta:{dns-2431  dns-2431 /api/v1/namespaces/dns-2431/pods/dns-2431 4d9ebdf2-b60b-47ec-81ea-626e910ddf11 8023128 0 2020-02-12 21:21:41 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-x6f6w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-x6f6w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-x6f6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 21:21:41.130: INFO: The status of Pod dns-2431 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 21:21:43.136: INFO: The status of Pod dns-2431 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 21:21:45.134: INFO: The status of Pod dns-2431 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 21:21:47.137: INFO: The status of Pod dns-2431 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Feb 12 21:21:47.137: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2431 PodName:dns-2431 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 21:21:47.137: INFO: >>> kubeConfig: /root/.kube/config
I0212 21:21:47.186351       9 log.go:172] (0xc002844a50) (0xc000a39360) Create stream
I0212 21:21:47.186393       9 log.go:172] (0xc002844a50) (0xc000a39360) Stream added, broadcasting: 1
I0212 21:21:47.190218       9 log.go:172] (0xc002844a50) Reply frame received for 1
I0212 21:21:47.190262       9 log.go:172] (0xc002844a50) (0xc000bccbe0) Create stream
I0212 21:21:47.190279       9 log.go:172] (0xc002844a50) (0xc000bccbe0) Stream added, broadcasting: 3
I0212 21:21:47.192279       9 log.go:172] (0xc002844a50) Reply frame received for 3
I0212 21:21:47.192302       9 log.go:172] (0xc002844a50) (0xc000a39680) Create stream
I0212 21:21:47.192316       9 log.go:172] (0xc002844a50) (0xc000a39680) Stream added, broadcasting: 5
I0212 21:21:47.195533       9 log.go:172] (0xc002844a50) Reply frame received for 5
I0212 21:21:47.310438       9 log.go:172] (0xc002844a50) Data frame received for 3
I0212 21:21:47.310532       9 log.go:172] (0xc000bccbe0) (3) Data frame handling
I0212 21:21:47.310599       9 log.go:172] (0xc000bccbe0) (3) Data frame sent
I0212 21:21:47.385932       9 log.go:172] (0xc002844a50) Data frame received for 1
I0212 21:21:47.385992       9 log.go:172] (0xc002844a50) (0xc000bccbe0) Stream removed, broadcasting: 3
I0212 21:21:47.386024       9 log.go:172] (0xc000a39360) (1) Data frame handling
I0212 21:21:47.386049       9 log.go:172] (0xc000a39360) (1) Data frame sent
I0212 21:21:47.386067       9 log.go:172] (0xc002844a50) (0xc000a39680) Stream removed, broadcasting: 5
I0212 21:21:47.386094       9 log.go:172] (0xc002844a50) (0xc000a39360) Stream removed, broadcasting: 1
I0212 21:21:47.386115       9 log.go:172] (0xc002844a50) Go away received
I0212 21:21:47.386213       9 log.go:172] (0xc002844a50) (0xc000a39360) Stream removed, broadcasting: 1
I0212 21:21:47.386233       9 log.go:172] (0xc002844a50) (0xc000bccbe0) Stream removed, broadcasting: 3
I0212 21:21:47.386244       9 log.go:172] (0xc002844a50) (0xc000a39680) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Feb 12 21:21:47.386: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2431 PodName:dns-2431 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 21:21:47.386: INFO: >>> kubeConfig: /root/.kube/config
I0212 21:21:47.420716       9 log.go:172] (0xc002909340) (0xc00139ac80) Create stream
I0212 21:21:47.420779       9 log.go:172] (0xc002909340) (0xc00139ac80) Stream added, broadcasting: 1
I0212 21:21:47.423751       9 log.go:172] (0xc002909340) Reply frame received for 1
I0212 21:21:47.423773       9 log.go:172] (0xc002909340) (0xc000a39900) Create stream
I0212 21:21:47.423779       9 log.go:172] (0xc002909340) (0xc000a39900) Stream added, broadcasting: 3
I0212 21:21:47.425495       9 log.go:172] (0xc002909340) Reply frame received for 3
I0212 21:21:47.425541       9 log.go:172] (0xc002909340) (0xc000bcce60) Create stream
I0212 21:21:47.425557       9 log.go:172] (0xc002909340) (0xc000bcce60) Stream added, broadcasting: 5
I0212 21:21:47.430922       9 log.go:172] (0xc002909340) Reply frame received for 5
I0212 21:21:47.499330       9 log.go:172] (0xc002909340) Data frame received for 3
I0212 21:21:47.499362       9 log.go:172] (0xc000a39900) (3) Data frame handling
I0212 21:21:47.499376       9 log.go:172] (0xc000a39900) (3) Data frame sent
I0212 21:21:47.564769       9 log.go:172] (0xc002909340) (0xc000a39900) Stream removed, broadcasting: 3
I0212 21:21:47.564837       9 log.go:172] (0xc002909340) Data frame received for 1
I0212 21:21:47.564861       9 log.go:172] (0xc00139ac80) (1) Data frame handling
I0212 21:21:47.564874       9 log.go:172] (0xc00139ac80) (1) Data frame sent
I0212 21:21:47.564884       9 log.go:172] (0xc002909340) (0xc00139ac80) Stream removed, broadcasting: 1
I0212 21:21:47.564927       9 log.go:172] (0xc002909340) (0xc000bcce60) Stream removed, broadcasting: 5
I0212 21:21:47.564987       9 log.go:172] (0xc002909340) Go away received
I0212 21:21:47.565028       9 log.go:172] (0xc002909340) (0xc00139ac80) Stream removed, broadcasting: 1
I0212 21:21:47.565049       9 log.go:172] (0xc002909340) (0xc000a39900) Stream removed, broadcasting: 3
I0212 21:21:47.565062       9 log.go:172] (0xc002909340) (0xc000bcce60) Stream removed, broadcasting: 5
Feb 12 21:21:47.565: INFO: Deleting pod dns-2431...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:21:47.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2431" for this suite.

• [SLOW TEST:6.603 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":277,"completed":161,"skipped":2709,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:21:47.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:22:04.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2087" for this suite.

• [SLOW TEST:16.831 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":277,"completed":162,"skipped":2766,"failed":0}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when creating a pod with a lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:22:04.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when creating a pod with a lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 12 21:22:20.880: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 21:22:20.887: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 21:22:22.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 21:22:22.897: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 21:22:24.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 21:22:24.894: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 21:22:26.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 21:22:26.893: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 21:22:28.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 21:22:28.892: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 21:22:30.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 21:22:30.894: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 12 21:22:32.887: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 12 21:22:32.893: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:22:32.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3067" for this suite.

• [SLOW TEST:28.477 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when creating a pod with a lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":277,"completed":163,"skipped":2768,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:22:32.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 12 21:22:43.084: INFO: &Pod{ObjectMeta:{send-events-849d78c3-e0c6-4bd9-9ffe-90e59d718f20  events-6704 /api/v1/namespaces/events-6704/pods/send-events-849d78c3-e0c6-4bd9-9ffe-90e59d718f20 226184bf-8abf-4136-868a-9d12f5920840 8023394 0 2020-02-12 21:22:33 +0000 UTC   map[name:foo time:50602682] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hph2c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hph2c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hph2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 21:22:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 21:22:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 21:22:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 21:22:33 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-12 21:22:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-12 21:22:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://4796297c730049151fb6c0d27660208955fff996cd5d5e2f86ae1c815b0a6ea0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Feb 12 21:22:45.092: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 12 21:22:47.143: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:22:47.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6704" for this suite.

• [SLOW TEST:14.282 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":277,"completed":164,"skipped":2798,"failed":0}
SSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:22:47.192: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 12 21:22:53.845: INFO: Successfully updated pod "pod-update-bee4604c-f350-4152-99a5-1f602f996d02"
STEP: verifying the updated pod is in kubernetes
Feb 12 21:22:53.886: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:22:53.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3583" for this suite.

• [SLOW TEST:6.708 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":277,"completed":165,"skipped":2802,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:22:53.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Feb 12 21:22:54.004: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3cc0e8f0-551d-4ba4-895f-c4a53e8fd9a3" in namespace "projected-2282" to be "Succeeded or Failed"
Feb 12 21:22:54.015: INFO: Pod "downwardapi-volume-3cc0e8f0-551d-4ba4-895f-c4a53e8fd9a3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.401777ms
Feb 12 21:22:56.020: INFO: Pod "downwardapi-volume-3cc0e8f0-551d-4ba4-895f-c4a53e8fd9a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01637409s
Feb 12 21:22:58.025: INFO: Pod "downwardapi-volume-3cc0e8f0-551d-4ba4-895f-c4a53e8fd9a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021334161s
Feb 12 21:23:00.029: INFO: Pod "downwardapi-volume-3cc0e8f0-551d-4ba4-895f-c4a53e8fd9a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025245967s
Feb 12 21:23:02.077: INFO: Pod "downwardapi-volume-3cc0e8f0-551d-4ba4-895f-c4a53e8fd9a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072806565s
STEP: Saw pod success
Feb 12 21:23:02.077: INFO: Pod "downwardapi-volume-3cc0e8f0-551d-4ba4-895f-c4a53e8fd9a3" satisfied condition "Succeeded or Failed"
Feb 12 21:23:02.092: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-3cc0e8f0-551d-4ba4-895f-c4a53e8fd9a3 container client-container: 
STEP: delete the pod
Feb 12 21:23:02.209: INFO: Waiting for pod downwardapi-volume-3cc0e8f0-551d-4ba4-895f-c4a53e8fd9a3 to disappear
Feb 12 21:23:02.214: INFO: Pod downwardapi-volume-3cc0e8f0-551d-4ba4-895f-c4a53e8fd9a3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:23:02.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2282" for this suite.

• [SLOW TEST:8.365 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":277,"completed":166,"skipped":2810,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:23:02.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 12 21:23:03.102: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 12 21:23:05.126: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139383, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139383, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139383, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139383, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:23:07.134: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139383, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139383, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139383, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139383, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:23:09.133: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139383, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139383, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139383, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139383, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:23:11.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139383, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139383, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139383, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139383, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 12 21:23:14.182: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 21:23:14.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:23:15.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8801" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:13.248 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":277,"completed":167,"skipped":2812,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:23:15.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Feb 12 21:23:15.600: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e01c9b6-fc8c-42d0-8d5d-c378ede5096a" in namespace "projected-2403" to be "Succeeded or Failed"
Feb 12 21:23:15.607: INFO: Pod "downwardapi-volume-9e01c9b6-fc8c-42d0-8d5d-c378ede5096a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.517558ms
Feb 12 21:23:17.614: INFO: Pod "downwardapi-volume-9e01c9b6-fc8c-42d0-8d5d-c378ede5096a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014703595s
Feb 12 21:23:19.622: INFO: Pod "downwardapi-volume-9e01c9b6-fc8c-42d0-8d5d-c378ede5096a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021978114s
Feb 12 21:23:21.627: INFO: Pod "downwardapi-volume-9e01c9b6-fc8c-42d0-8d5d-c378ede5096a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027021934s
Feb 12 21:23:23.633: INFO: Pod "downwardapi-volume-9e01c9b6-fc8c-42d0-8d5d-c378ede5096a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.033752226s
Feb 12 21:23:25.639: INFO: Pod "downwardapi-volume-9e01c9b6-fc8c-42d0-8d5d-c378ede5096a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.03974526s
STEP: Saw pod success
Feb 12 21:23:25.639: INFO: Pod "downwardapi-volume-9e01c9b6-fc8c-42d0-8d5d-c378ede5096a" satisfied condition "Succeeded or Failed"
Feb 12 21:23:25.643: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9e01c9b6-fc8c-42d0-8d5d-c378ede5096a container client-container: 
STEP: delete the pod
Feb 12 21:23:25.778: INFO: Waiting for pod downwardapi-volume-9e01c9b6-fc8c-42d0-8d5d-c378ede5096a to disappear
Feb 12 21:23:25.795: INFO: Pod downwardapi-volume-9e01c9b6-fc8c-42d0-8d5d-c378ede5096a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:23:25.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2403" for this suite.

• [SLOW TEST:10.316 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":277,"completed":168,"skipped":2822,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:23:25.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 12 21:23:26.350: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 12 21:23:28.363: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139406, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139406, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139406, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139406, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:23:30.366: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139406, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139406, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139406, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139406, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:23:32.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139406, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139406, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139406, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139406, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 12 21:23:35.383: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1

[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Feb 12 21:23:35.420: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:23:35.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3607" for this suite.
STEP: Destroying namespace "webhook-3607-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.770 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":277,"completed":169,"skipped":2868,"failed":0}
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:23:35.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 12 21:23:46.277: INFO: Successfully updated pod "pod-update-activedeadlineseconds-c501862a-a631-4083-9134-c6fc80f51702"
Feb 12 21:23:46.277: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-c501862a-a631-4083-9134-c6fc80f51702" in namespace "pods-1387" to be "terminated due to deadline exceeded"
Feb 12 21:23:46.297: INFO: Pod "pod-update-activedeadlineseconds-c501862a-a631-4083-9134-c6fc80f51702": Phase="Running", Reason="", readiness=true. Elapsed: 20.074205ms
Feb 12 21:23:48.304: INFO: Pod "pod-update-activedeadlineseconds-c501862a-a631-4083-9134-c6fc80f51702": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.026918864s
Feb 12 21:23:48.304: INFO: Pod "pod-update-activedeadlineseconds-c501862a-a631-4083-9134-c6fc80f51702" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:23:48.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1387" for this suite.

• [SLOW TEST:12.718 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":277,"completed":170,"skipped":2868,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:23:48.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 12 21:23:49.645: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 12 21:23:51.659: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139429, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139429, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139429, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139429, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:23:53.693: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139429, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139429, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139429, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139429, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 21:23:55.665: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139429, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139429, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139429, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717139429, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 12 21:23:58.695: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:24:11.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2706" for this suite.
STEP: Destroying namespace "webhook-2706-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:22.939 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":277,"completed":171,"skipped":2876,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:24:11.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1794
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1794
STEP: Creating statefulset with conflicting port in namespace statefulset-1794
STEP: Waiting until pod test-pod starts running in namespace statefulset-1794
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-1794
Feb 12 21:24:22.131: INFO: Observed stateful pod in namespace: statefulset-1794, name: ss-0, uid: 00c8c563-7288-4bb9-85e0-4dfc41aca896, status phase: Pending. Waiting for statefulset controller to delete it.
Feb 12 21:29:22.132: FAIL: Pod ss-0 expected to be re-created at least once

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func10.2.12()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:762 +0x12b2
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001912f00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:111 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc001912f00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc001912f00, 0x4cf4ab0)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 12 21:29:22.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-1794'
Feb 12 21:29:22.346: INFO: stderr: ""
Feb 12 21:29:22.346: INFO: stdout: "Name:           ss-0\nNamespace:      statefulset-1794\nPriority:       0\nNode:           jerma-server-mvvl6gufaqub/\nLabels:         baz=blah\n                controller-revision-hash=ss-5d68d76f44\n                foo=bar\n                statefulset.kubernetes.io/pod-name=ss-0\nAnnotations:    \nStatus:         Pending\nIP:             \nIPs:            \nControlled By:  StatefulSet/ss\nContainers:\n  webserver:\n    Image:        docker.io/library/httpd:2.4.38-alpine\n    Port:         21017/TCP\n    Host Port:    21017/TCP\n    Environment:  \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-2g8pp (ro)\nVolumes:\n  default-token-2g8pp:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-2g8pp\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type     Reason            Age   From                                Message\n  ----     ------            ----  ----                                -------\n  Warning  PodFitsHostPorts  5m8s  kubelet, jerma-server-mvvl6gufaqub  Predicate PodFitsHostPorts failed\n"
Feb 12 21:29:22.346: INFO: 
Output of kubectl describe ss-0:
Name:           ss-0
Namespace:      statefulset-1794
Priority:       0
Node:           jerma-server-mvvl6gufaqub/
Labels:         baz=blah
                controller-revision-hash=ss-5d68d76f44
                foo=bar
                statefulset.kubernetes.io/pod-name=ss-0
Annotations:    
Status:         Pending
IP:             
IPs:            
Controlled By:  StatefulSet/ss
Containers:
  webserver:
    Image:        docker.io/library/httpd:2.4.38-alpine
    Port:         21017/TCP
    Host Port:    21017/TCP
    Environment:  
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-2g8pp (ro)
Volumes:
  default-token-2g8pp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-2g8pp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From                                Message
  ----     ------            ----  ----                                -------
  Warning  PodFitsHostPorts  5m8s  kubelet, jerma-server-mvvl6gufaqub  Predicate PodFitsHostPorts failed

Feb 12 21:29:22.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-1794 --tail=100'
Feb 12 21:29:22.525: INFO: rc: 1
Feb 12 21:29:22.525: INFO: 
Last 100 log lines of ss-0:

Feb 12 21:29:22.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-1794'
Feb 12 21:29:22.650: INFO: stderr: ""
Feb 12 21:29:22.650: INFO: stdout: "Name:         test-pod\nNamespace:    statefulset-1794\nPriority:     0\nNode:         jerma-server-mvvl6gufaqub/10.96.1.234\nStart Time:   Wed, 12 Feb 2020 21:24:11 +0000\nLabels:       \nAnnotations:  \nStatus:       Running\nIP:           10.32.0.4\nIPs:\n  IP:  10.32.0.4\nContainers:\n  webserver:\n    Container ID:   docker://493d642328b156a9a1c8baff7fb2603da2fe444b1cb6af8aacbf3336aa256a3d\n    Image:          docker.io/library/httpd:2.4.38-alpine\n    Image ID:       docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\n    Port:           21017/TCP\n    Host Port:      21017/TCP\n    State:          Running\n      Started:      Wed, 12 Feb 2020 21:24:20 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-2g8pp (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-2g8pp:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-2g8pp\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason   Age   From                                Message\n  ----    ------   ----  ----                                -------\n  Normal  Pulled   5m5s  kubelet, jerma-server-mvvl6gufaqub  Container image \"docker.io/library/httpd:2.4.38-alpine\" already present on machine\n  Normal  Created  5m3s  kubelet, jerma-server-mvvl6gufaqub  Created container webserver\n  Normal  Started  5m2s  kubelet, jerma-server-mvvl6gufaqub  Started container webserver\n"
Feb 12 21:29:22.651: INFO: 
Output of kubectl describe test-pod:
Name:         test-pod
Namespace:    statefulset-1794
Priority:     0
Node:         jerma-server-mvvl6gufaqub/10.96.1.234
Start Time:   Wed, 12 Feb 2020 21:24:11 +0000
Labels:       
Annotations:  
Status:       Running
IP:           10.32.0.4
IPs:
  IP:  10.32.0.4
Containers:
  webserver:
    Container ID:   docker://493d642328b156a9a1c8baff7fb2603da2fe444b1cb6af8aacbf3336aa256a3d
    Image:          docker.io/library/httpd:2.4.38-alpine
    Image ID:       docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060
    Port:           21017/TCP
    Host Port:      21017/TCP
    State:          Running
      Started:      Wed, 12 Feb 2020 21:24:20 +0000
    Ready:          True
    Restart Count:  0
    Environment:    
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-2g8pp (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-2g8pp:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-2g8pp
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age   From                                Message
  ----    ------   ----  ----                                -------
  Normal  Pulled   5m5s  kubelet, jerma-server-mvvl6gufaqub  Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
  Normal  Created  5m3s  kubelet, jerma-server-mvvl6gufaqub  Created container webserver
  Normal  Started  5m2s  kubelet, jerma-server-mvvl6gufaqub  Started container webserver

Feb 12 21:29:22.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-1794 --tail=100'
Feb 12 21:29:22.753: INFO: stderr: ""
Feb 12 21:29:22.753: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.32.0.4. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.32.0.4. Set the 'ServerName' directive globally to suppress this message\n[Wed Feb 12 21:24:20.990130 2020] [mpm_event:notice] [pid 1:tid 140213903989608] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed Feb 12 21:24:20.990633 2020] [core:notice] [pid 1:tid 140213903989608] AH00094: Command line: 'httpd -D FOREGROUND'\n"
Feb 12 21:29:22.753: INFO: 
Last 100 log lines of test-pod:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.32.0.4. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.32.0.4. Set the 'ServerName' directive globally to suppress this message
[Wed Feb 12 21:24:20.990130 2020] [mpm_event:notice] [pid 1:tid 140213903989608] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Wed Feb 12 21:24:20.990633 2020] [core:notice] [pid 1:tid 140213903989608] AH00094: Command line: 'httpd -D FOREGROUND'

Feb 12 21:29:22.753: INFO: Deleting all statefulset in ns statefulset-1794
Feb 12 21:29:22.756: INFO: Scaling statefulset ss to 0
Feb 12 21:29:32.792: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 21:29:32.796: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
STEP: Collecting events from namespace "statefulset-1794".
STEP: Found 9 events.
Feb 12 21:29:32.845: INFO: At 2020-02-12 21:24:12 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Feb 12 21:29:32.845: INFO: At 2020-02-12 21:24:12 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-1794/ss is recreating failed Pod ss-0
Feb 12 21:29:32.845: INFO: At 2020-02-12 21:24:12 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Feb 12 21:29:32.845: INFO: At 2020-02-12 21:24:12 +0000 UTC - event for ss-0: {kubelet jerma-server-mvvl6gufaqub} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 12 21:29:32.845: INFO: At 2020-02-12 21:24:13 +0000 UTC - event for ss-0: {kubelet jerma-server-mvvl6gufaqub} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 12 21:29:32.845: INFO: At 2020-02-12 21:24:14 +0000 UTC - event for ss-0: {kubelet jerma-server-mvvl6gufaqub} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb 12 21:29:32.845: INFO: At 2020-02-12 21:24:17 +0000 UTC - event for test-pod: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
Feb 12 21:29:32.845: INFO: At 2020-02-12 21:24:19 +0000 UTC - event for test-pod: {kubelet jerma-server-mvvl6gufaqub} Created: Created container webserver
Feb 12 21:29:32.845: INFO: At 2020-02-12 21:24:20 +0000 UTC - event for test-pod: {kubelet jerma-server-mvvl6gufaqub} Started: Started container webserver
Feb 12 21:29:32.851: INFO: POD       NODE                       PHASE    GRACE  CONDITIONS
Feb 12 21:29:32.851: INFO: test-pod  jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 21:24:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 21:24:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 21:24:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 21:24:11 +0000 UTC  }]
Feb 12 21:29:32.851: INFO: 
Feb 12 21:29:32.856: INFO: 
Logging node info for node jerma-node
Feb 12 21:29:32.860: INFO: Node Info: &Node{ObjectMeta:{jerma-node   /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 8024050 0 2020-01-04 11:59:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-12 21:24:51 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-12 21:24:51 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-12 21:24:51 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-12 21:24:51 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Feb 12 21:29:32.860: INFO: 
Logging kubelet events for node jerma-node
Feb 12 21:29:32.863: INFO: 
Logging pods the kubelet thinks are on node jerma-node
Feb 12 21:29:32.891: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded)
Feb 12 21:29:32.892: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 21:29:32.892: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded)
Feb 12 21:29:32.892: INFO: 	Container weave ready: true, restart count 1
Feb 12 21:29:32.892: INFO: 	Container weave-npc ready: true, restart count 0
W0212 21:29:32.896243       9 metrics_grabber.go:80] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 21:29:32.934: INFO: 
Latency metrics for node jerma-node
Feb 12 21:29:32.934: INFO: 
Logging node info for node jerma-server-mvvl6gufaqub
Feb 12 21:29:32.939: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub   /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 8024410 0 2020-01-04 11:47:40 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-12 21:27:32 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-12 21:27:32 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-12 21:27:32 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-12 21:27:32 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[ollivier/functest-kubernetes-security@sha256:e07875af6d375759fd233dc464382bb51d2464f6ae50a60625e41226eb1f87be ollivier/functest-kubernetes-security:latest],SizeBytes:1118568659,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Feb 12 21:29:32.939: INFO: 
Logging kubelet events for node jerma-server-mvvl6gufaqub
Feb 12 21:29:32.945: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub
Feb 12 21:29:32.953: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Feb 12 21:29:32.953: INFO: 	Container coredns ready: true, restart count 0
Feb 12 21:29:32.953: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Feb 12 21:29:32.953: INFO: 	Container coredns ready: true, restart count 0
Feb 12 21:29:32.953: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Feb 12 21:29:32.953: INFO: 	Container kube-controller-manager ready: true, restart count 6
Feb 12 21:29:32.953: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded)
Feb 12 21:29:32.953: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 21:29:32.953: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded)
Feb 12 21:29:32.953: INFO: 	Container weave ready: true, restart count 0
Feb 12 21:29:32.953: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 21:29:32.953: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Feb 12 21:29:32.953: INFO: 	Container kube-scheduler ready: true, restart count 9
Feb 12 21:29:32.953: INFO: test-pod started at 2020-02-12 21:24:11 +0000 UTC (0+1 container statuses recorded)
Feb 12 21:29:32.953: INFO: 	Container webserver ready: true, restart count 0
Feb 12 21:29:32.953: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Feb 12 21:29:32.953: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 12 21:29:32.953: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Feb 12 21:29:32.953: INFO: 	Container etcd ready: true, restart count 1
W0212 21:29:32.957509       9 metrics_grabber.go:80] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 21:29:33.010: INFO: 
Latency metrics for node jerma-server-mvvl6gufaqub
Feb 12 21:29:33.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1794" for this suite.

• Failure [321.761 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703

    Feb 12 21:29:22.132: Pod ss-0 expected to be re-created at least once

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:762
------------------------------
{"msg":"FAILED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":277,"completed":171,"skipped":2888,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:29:33.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-5869a0d2-d869-4198-b577-fb877cde43c8
STEP: Creating configMap with name cm-test-opt-upd-3b05c6de-4124-4bce-94ad-66ef02e360c4
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5869a0d2-d869-4198-b577-fb877cde43c8
STEP: Updating configmap cm-test-opt-upd-3b05c6de-4124-4bce-94ad-66ef02e360c4
STEP: Creating configMap with name cm-test-opt-create-aeaad7e2-093d-418a-862d-37e637d4a4cd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:29:49.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4264" for this suite.

• [SLOW TEST:16.317 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":277,"completed":172,"skipped":2902,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:29:49.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:30:05.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4651" for this suite.

• [SLOW TEST:16.285 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":277,"completed":173,"skipped":2941,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:30:05.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Feb 12 21:30:05.712: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e2b80e3-c2ca-4e32-a7f6-4098062b61ee" in namespace "downward-api-3289" to be "Succeeded or Failed"
Feb 12 21:30:05.846: INFO: Pod "downwardapi-volume-4e2b80e3-c2ca-4e32-a7f6-4098062b61ee": Phase="Pending", Reason="", readiness=false. Elapsed: 133.582776ms
Feb 12 21:30:07.862: INFO: Pod "downwardapi-volume-4e2b80e3-c2ca-4e32-a7f6-4098062b61ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149895671s
Feb 12 21:30:09.870: INFO: Pod "downwardapi-volume-4e2b80e3-c2ca-4e32-a7f6-4098062b61ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158195587s
Feb 12 21:30:11.877: INFO: Pod "downwardapi-volume-4e2b80e3-c2ca-4e32-a7f6-4098062b61ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.16494482s
Feb 12 21:30:13.897: INFO: Pod "downwardapi-volume-4e2b80e3-c2ca-4e32-a7f6-4098062b61ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.184636696s
STEP: Saw pod success
Feb 12 21:30:13.897: INFO: Pod "downwardapi-volume-4e2b80e3-c2ca-4e32-a7f6-4098062b61ee" satisfied condition "Succeeded or Failed"
Feb 12 21:30:13.900: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-4e2b80e3-c2ca-4e32-a7f6-4098062b61ee container client-container: 
STEP: delete the pod
Feb 12 21:30:13.936: INFO: Waiting for pod downwardapi-volume-4e2b80e3-c2ca-4e32-a7f6-4098062b61ee to disappear
Feb 12 21:30:13.939: INFO: Pod downwardapi-volume-4e2b80e3-c2ca-4e32-a7f6-4098062b61ee no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:30:13.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3289" for this suite.

• [SLOW TEST:8.411 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":277,"completed":174,"skipped":2952,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:30:14.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-secret-527g
STEP: Creating a pod to test atomic-volume-subpath
Feb 12 21:30:14.223: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-527g" in namespace "subpath-8285" to be "Succeeded or Failed"
Feb 12 21:30:14.244: INFO: Pod "pod-subpath-test-secret-527g": Phase="Pending", Reason="", readiness=false. Elapsed: 21.502675ms
Feb 12 21:30:16.250: INFO: Pod "pod-subpath-test-secret-527g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027257471s
Feb 12 21:30:18.256: INFO: Pod "pod-subpath-test-secret-527g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033188468s
Feb 12 21:30:20.261: INFO: Pod "pod-subpath-test-secret-527g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038556579s
Feb 12 21:30:22.268: INFO: Pod "pod-subpath-test-secret-527g": Phase="Running", Reason="", readiness=true. Elapsed: 8.044998215s
Feb 12 21:30:24.274: INFO: Pod "pod-subpath-test-secret-527g": Phase="Running", Reason="", readiness=true. Elapsed: 10.051339313s
Feb 12 21:30:26.283: INFO: Pod "pod-subpath-test-secret-527g": Phase="Running", Reason="", readiness=true. Elapsed: 12.060473732s
Feb 12 21:30:28.291: INFO: Pod "pod-subpath-test-secret-527g": Phase="Running", Reason="", readiness=true. Elapsed: 14.067794071s
Feb 12 21:30:30.297: INFO: Pod "pod-subpath-test-secret-527g": Phase="Running", Reason="", readiness=true. Elapsed: 16.074408398s
Feb 12 21:30:32.303: INFO: Pod "pod-subpath-test-secret-527g": Phase="Running", Reason="", readiness=true. Elapsed: 18.080137769s
Feb 12 21:30:34.308: INFO: Pod "pod-subpath-test-secret-527g": Phase="Running", Reason="", readiness=true. Elapsed: 20.085361668s
Feb 12 21:30:36.321: INFO: Pod "pod-subpath-test-secret-527g": Phase="Running", Reason="", readiness=true. Elapsed: 22.097909353s
Feb 12 21:30:38.330: INFO: Pod "pod-subpath-test-secret-527g": Phase="Running", Reason="", readiness=true. Elapsed: 24.106965419s
Feb 12 21:30:40.336: INFO: Pod "pod-subpath-test-secret-527g": Phase="Running", Reason="", readiness=true. Elapsed: 26.113230564s
Feb 12 21:30:42.344: INFO: Pod "pod-subpath-test-secret-527g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.120858129s
STEP: Saw pod success
Feb 12 21:30:42.344: INFO: Pod "pod-subpath-test-secret-527g" satisfied condition "Succeeded or Failed"
Feb 12 21:30:42.346: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-527g container test-container-subpath-secret-527g: 
STEP: delete the pod
Feb 12 21:30:42.400: INFO: Waiting for pod pod-subpath-test-secret-527g to disappear
Feb 12 21:30:42.404: INFO: Pod pod-subpath-test-secret-527g no longer exists
STEP: Deleting pod pod-subpath-test-secret-527g
Feb 12 21:30:42.404: INFO: Deleting pod "pod-subpath-test-secret-527g" in namespace "subpath-8285"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:30:42.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8285" for this suite.

• [SLOW TEST:28.478 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":277,"completed":175,"skipped":2981,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:30:42.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6998
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-6998
Feb 12 21:30:42.824: INFO: Found 0 stateful pods, waiting for 1
Feb 12 21:30:52.831: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 12 21:30:52.864: INFO: Deleting all statefulset in ns statefulset-6998
Feb 12 21:30:52.872: INFO: Scaling statefulset ss to 0
Feb 12 21:31:13.001: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 21:31:13.003: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:31:13.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6998" for this suite.

• [SLOW TEST:30.508 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":277,"completed":176,"skipped":2984,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:31:13.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 12 21:31:13.118: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 12 21:31:13.136: INFO: Waiting for terminating namespaces to be deleted...
Feb 12 21:31:13.139: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 12 21:31:13.146: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb 12 21:31:13.146: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 21:31:13.146: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 12 21:31:13.146: INFO: 	Container weave ready: true, restart count 1
Feb 12 21:31:13.146: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 21:31:13.146: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 12 21:31:13.161: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 12 21:31:13.161: INFO: 	Container kube-scheduler ready: true, restart count 9
Feb 12 21:31:13.161: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 12 21:31:13.161: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 12 21:31:13.161: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 12 21:31:13.161: INFO: 	Container etcd ready: true, restart count 1
Feb 12 21:31:13.161: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 12 21:31:13.161: INFO: 	Container coredns ready: true, restart count 0
Feb 12 21:31:13.161: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 12 21:31:13.161: INFO: 	Container coredns ready: true, restart count 0
Feb 12 21:31:13.161: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 12 21:31:13.161: INFO: 	Container kube-controller-manager ready: true, restart count 6
Feb 12 21:31:13.161: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb 12 21:31:13.161: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 21:31:13.161: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 12 21:31:13.161: INFO: 	Container weave ready: true, restart count 0
Feb 12 21:31:13.161: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Feb 12 21:31:13.244: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 12 21:31:13.244: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 12 21:31:13.244: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Feb 12 21:31:13.244: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Feb 12 21:31:13.244: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Feb 12 21:31:13.244: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Feb 12 21:31:13.244: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Feb 12 21:31:13.244: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 12 21:31:13.244: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Feb 12 21:31:13.244: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
STEP: Starting Pods to consume most of the cluster CPU.
Feb 12 21:31:13.244: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
Feb 12 21:31:13.253: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b067ad8d-7d83-4512-a0dd-3f9af77b34c0.15f2c50bb9b88dc0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4947/filler-pod-b067ad8d-7d83-4512-a0dd-3f9af77b34c0 to jerma-server-mvvl6gufaqub]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b067ad8d-7d83-4512-a0dd-3f9af77b34c0.15f2c50cda03cd5c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b067ad8d-7d83-4512-a0dd-3f9af77b34c0.15f2c50da2710e49], Reason = [Created], Message = [Created container filler-pod-b067ad8d-7d83-4512-a0dd-3f9af77b34c0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b067ad8d-7d83-4512-a0dd-3f9af77b34c0.15f2c50dc5553186], Reason = [Started], Message = [Started container filler-pod-b067ad8d-7d83-4512-a0dd-3f9af77b34c0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cb6b3b91-bd78-4f87-886b-43def2de13b1.15f2c50bb69a4116], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4947/filler-pod-cb6b3b91-bd78-4f87-886b-43def2de13b1 to jerma-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cb6b3b91-bd78-4f87-886b-43def2de13b1.15f2c50cbb5dc45c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cb6b3b91-bd78-4f87-886b-43def2de13b1.15f2c50d7180c4f8], Reason = [Created], Message = [Created container filler-pod-cb6b3b91-bd78-4f87-886b-43def2de13b1]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-cb6b3b91-bd78-4f87-886b-43def2de13b1.15f2c50d98e91d9b], Reason = [Started], Message = [Started container filler-pod-cb6b3b91-bd78-4f87-886b-43def2de13b1]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f2c50e0fce5532], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:31:24.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4947" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:11.435 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":277,"completed":177,"skipped":2992,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:31:24.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Feb 12 21:31:24.564: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Feb 12 21:31:24.627: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Feb 12 21:31:24.627: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Feb 12 21:31:24.642: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Feb 12 21:31:24.642: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Feb 12 21:31:24.701: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Feb 12 21:31:24.701: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Feb 12 21:31:31.791: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:31:31.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-5870" for this suite.

• [SLOW TEST:7.559 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":277,"completed":178,"skipped":3013,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:31:32.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:31:33.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7354" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":277,"completed":179,"skipped":3014,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:31:34.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 12 21:31:34.533: INFO: Waiting up to 5m0s for pod "pod-6b7a6ba7-5a8e-41c8-aeac-de683f6a30e5" in namespace "emptydir-8629" to be "Succeeded or Failed"
Feb 12 21:31:34.545: INFO: Pod "pod-6b7a6ba7-5a8e-41c8-aeac-de683f6a30e5": Phase="Pending", Reason="", readiness=false. Elapsed: 11.112972ms
Feb 12 21:31:36.553: INFO: Pod "pod-6b7a6ba7-5a8e-41c8-aeac-de683f6a30e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019988269s
Feb 12 21:31:38.566: INFO: Pod "pod-6b7a6ba7-5a8e-41c8-aeac-de683f6a30e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033026927s
Feb 12 21:31:40.614: INFO: Pod "pod-6b7a6ba7-5a8e-41c8-aeac-de683f6a30e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080228524s
Feb 12 21:31:42.620: INFO: Pod "pod-6b7a6ba7-5a8e-41c8-aeac-de683f6a30e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.086553915s
STEP: Saw pod success
Feb 12 21:31:42.620: INFO: Pod "pod-6b7a6ba7-5a8e-41c8-aeac-de683f6a30e5" satisfied condition "Succeeded or Failed"
Feb 12 21:31:42.627: INFO: Trying to get logs from node jerma-node pod pod-6b7a6ba7-5a8e-41c8-aeac-de683f6a30e5 container test-container: 
STEP: delete the pod
Feb 12 21:31:42.694: INFO: Waiting for pod pod-6b7a6ba7-5a8e-41c8-aeac-de683f6a30e5 to disappear
Feb 12 21:31:42.862: INFO: Pod pod-6b7a6ba7-5a8e-41c8-aeac-de683f6a30e5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:31:42.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8629" for this suite.

• [SLOW TEST:8.512 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":180,"skipped":3015,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:31:42.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Feb 12 21:31:43.145: INFO: Waiting up to 5m0s for pod "downward-api-2b91877f-e635-4fb8-b645-46e80263a87c" in namespace "downward-api-1232" to be "Succeeded or Failed"
Feb 12 21:31:43.152: INFO: Pod "downward-api-2b91877f-e635-4fb8-b645-46e80263a87c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.293761ms
Feb 12 21:31:45.157: INFO: Pod "downward-api-2b91877f-e635-4fb8-b645-46e80263a87c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012242582s
Feb 12 21:31:47.175: INFO: Pod "downward-api-2b91877f-e635-4fb8-b645-46e80263a87c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030347918s
Feb 12 21:31:49.181: INFO: Pod "downward-api-2b91877f-e635-4fb8-b645-46e80263a87c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036110943s
Feb 12 21:31:51.187: INFO: Pod "downward-api-2b91877f-e635-4fb8-b645-46e80263a87c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04215081s
STEP: Saw pod success
Feb 12 21:31:51.187: INFO: Pod "downward-api-2b91877f-e635-4fb8-b645-46e80263a87c" satisfied condition "Succeeded or Failed"
Feb 12 21:31:51.192: INFO: Trying to get logs from node jerma-node pod downward-api-2b91877f-e635-4fb8-b645-46e80263a87c container dapi-container: 
STEP: delete the pod
Feb 12 21:31:51.231: INFO: Waiting for pod downward-api-2b91877f-e635-4fb8-b645-46e80263a87c to disappear
Feb 12 21:31:51.240: INFO: Pod downward-api-2b91877f-e635-4fb8-b645-46e80263a87c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:31:51.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1232" for this suite.

• [SLOW TEST:8.722 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":277,"completed":181,"skipped":3015,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:31:51.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 12 21:31:52.801: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 12 21:31:53.112: INFO: Waiting for terminating namespaces to be deleted...
Feb 12 21:31:53.130: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 12 21:31:53.140: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb 12 21:31:53.140: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 21:31:53.140: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 12 21:31:53.140: INFO: 	Container weave ready: true, restart count 1
Feb 12 21:31:53.140: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 21:31:53.140: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 12 21:31:53.151: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 12 21:31:53.151: INFO: 	Container kube-controller-manager ready: true, restart count 6
Feb 12 21:31:53.151: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb 12 21:31:53.151: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 21:31:53.151: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 12 21:31:53.151: INFO: 	Container weave ready: true, restart count 0
Feb 12 21:31:53.151: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 21:31:53.151: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 12 21:31:53.151: INFO: 	Container kube-scheduler ready: true, restart count 9
Feb 12 21:31:53.151: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 12 21:31:53.151: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 12 21:31:53.151: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 12 21:31:53.151: INFO: 	Container etcd ready: true, restart count 1
Feb 12 21:31:53.151: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 12 21:31:53.151: INFO: 	Container coredns ready: true, restart count 0
Feb 12 21:31:53.151: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 12 21:31:53.151: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c1c4cd6a-d74a-42fb-a4ba-9eea2b765485 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expecting it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expecting it not to be scheduled
STEP: removing the label kubernetes.io/e2e-c1c4cd6a-d74a-42fb-a4ba-9eea2b765485 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c1c4cd6a-d74a-42fb-a4ba-9eea2b765485
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:37:09.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3199" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:318.162 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":277,"completed":182,"skipped":3020,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:37:09.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:692
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4911
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-4911
I0212 21:37:09.980149       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4911, replica count: 2
I0212 21:37:13.030662       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 21:37:16.031003       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 21:37:19.031254       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0212 21:37:22.031547       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 12 21:37:22.031: INFO: Creating new exec pod
Feb 12 21:37:31.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4911 execpodwhqns -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb 12 21:37:33.980: INFO: stderr: "I0212 21:37:33.687703    2741 log.go:172] (0xc00010ef20) (0xc00069db80) Create stream\nI0212 21:37:33.687788    2741 log.go:172] (0xc00010ef20) (0xc00069db80) Stream added, broadcasting: 1\nI0212 21:37:33.703476    2741 log.go:172] (0xc00010ef20) Reply frame received for 1\nI0212 21:37:33.703565    2741 log.go:172] (0xc00010ef20) (0xc0008880a0) Create stream\nI0212 21:37:33.703582    2741 log.go:172] (0xc00010ef20) (0xc0008880a0) Stream added, broadcasting: 3\nI0212 21:37:33.705416    2741 log.go:172] (0xc00010ef20) Reply frame received for 3\nI0212 21:37:33.705471    2741 log.go:172] (0xc00010ef20) (0xc000558000) Create stream\nI0212 21:37:33.705504    2741 log.go:172] (0xc00010ef20) (0xc000558000) Stream added, broadcasting: 5\nI0212 21:37:33.707169    2741 log.go:172] (0xc00010ef20) Reply frame received for 5\nI0212 21:37:33.805992    2741 log.go:172] (0xc00010ef20) Data frame received for 5\nI0212 21:37:33.806195    2741 log.go:172] (0xc000558000) (5) Data frame handling\nI0212 21:37:33.806307    2741 log.go:172] (0xc000558000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0212 21:37:33.816180    2741 log.go:172] (0xc00010ef20) Data frame received for 5\nI0212 21:37:33.816232    2741 log.go:172] (0xc000558000) (5) Data frame handling\nI0212 21:37:33.816263    2741 log.go:172] (0xc000558000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0212 21:37:33.959667    2741 log.go:172] (0xc00010ef20) (0xc0008880a0) Stream removed, broadcasting: 3\nI0212 21:37:33.960095    2741 log.go:172] (0xc00010ef20) Data frame received for 1\nI0212 21:37:33.960122    2741 log.go:172] (0xc00069db80) (1) Data frame handling\nI0212 21:37:33.960170    2741 log.go:172] (0xc00069db80) (1) Data frame sent\nI0212 21:37:33.960480    2741 log.go:172] (0xc00010ef20) (0xc00069db80) Stream removed, broadcasting: 1\nI0212 21:37:33.960645    2741 log.go:172] (0xc00010ef20) (0xc000558000) Stream removed, broadcasting: 5\nI0212 21:37:33.960678    2741 log.go:172] (0xc00010ef20) Go away received\nI0212 21:37:33.961978    2741 log.go:172] (0xc00010ef20) (0xc00069db80) Stream removed, broadcasting: 1\nI0212 21:37:33.961999    2741 log.go:172] (0xc00010ef20) (0xc0008880a0) Stream removed, broadcasting: 3\nI0212 21:37:33.962008    2741 log.go:172] (0xc00010ef20) (0xc000558000) Stream removed, broadcasting: 5\n"
Feb 12 21:37:33.980: INFO: stdout: ""
Feb 12 21:37:33.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4911 execpodwhqns -- /bin/sh -x -c nc -zv -t -w 2 10.96.77.2 80'
Feb 12 21:37:34.362: INFO: stderr: "I0212 21:37:34.172159    2773 log.go:172] (0xc000b71130) (0xc000587ea0) Create stream\nI0212 21:37:34.172326    2773 log.go:172] (0xc000b71130) (0xc000587ea0) Stream added, broadcasting: 1\nI0212 21:37:34.176890    2773 log.go:172] (0xc000b71130) Reply frame received for 1\nI0212 21:37:34.176924    2773 log.go:172] (0xc000b71130) (0xc000ba00a0) Create stream\nI0212 21:37:34.176932    2773 log.go:172] (0xc000b71130) (0xc000ba00a0) Stream added, broadcasting: 3\nI0212 21:37:34.177801    2773 log.go:172] (0xc000b71130) Reply frame received for 3\nI0212 21:37:34.177834    2773 log.go:172] (0xc000b71130) (0xc000ba0140) Create stream\nI0212 21:37:34.177845    2773 log.go:172] (0xc000b71130) (0xc000ba0140) Stream added, broadcasting: 5\nI0212 21:37:34.178736    2773 log.go:172] (0xc000b71130) Reply frame received for 5\nI0212 21:37:34.258768    2773 log.go:172] (0xc000b71130) Data frame received for 5\nI0212 21:37:34.258870    2773 log.go:172] (0xc000ba0140) (5) Data frame handling\nI0212 21:37:34.258896    2773 log.go:172] (0xc000ba0140) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.77.2 80\nConnection to 10.96.77.2 80 port [tcp/http] succeeded!\nI0212 21:37:34.349486    2773 log.go:172] (0xc000b71130) (0xc000ba00a0) Stream removed, broadcasting: 3\nI0212 21:37:34.349775    2773 log.go:172] (0xc000b71130) Data frame received for 1\nI0212 21:37:34.349803    2773 log.go:172] (0xc000587ea0) (1) Data frame handling\nI0212 21:37:34.349830    2773 log.go:172] (0xc000587ea0) (1) Data frame sent\nI0212 21:37:34.350369    2773 log.go:172] (0xc000b71130) (0xc000587ea0) Stream removed, broadcasting: 1\nI0212 21:37:34.350639    2773 log.go:172] (0xc000b71130) (0xc000ba0140) Stream removed, broadcasting: 5\nI0212 21:37:34.350724    2773 log.go:172] (0xc000b71130) Go away received\nI0212 21:37:34.351996    2773 log.go:172] (0xc000b71130) (0xc000587ea0) Stream removed, broadcasting: 1\nI0212 21:37:34.352025    2773 log.go:172] (0xc000b71130) (0xc000ba00a0) Stream removed, broadcasting: 3\nI0212 21:37:34.352049    2773 log.go:172] (0xc000b71130) (0xc000ba0140) Stream removed, broadcasting: 5\n"
Feb 12 21:37:34.362: INFO: stdout: ""
Feb 12 21:37:34.362: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:37:34.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4911" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:696

• [SLOW TEST:24.633 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":277,"completed":183,"skipped":3020,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:37:34.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3447
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Feb 12 21:37:34.550: INFO: Found 0 stateful pods, waiting for 3
Feb 12 21:37:44.590: INFO: Found 2 stateful pods, waiting for 3
Feb 12 21:37:54.819: INFO: Found 2 stateful pods, waiting for 3
Feb 12 21:38:04.565: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 21:38:04.565: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 21:38:04.565: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 12 21:38:14.563: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 21:38:14.563: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 21:38:14.563: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 21:38:14.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3447 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 12 21:38:14.953: INFO: stderr: "I0212 21:38:14.721218    2793 log.go:172] (0xc0000fafd0) (0xc00048eb40) Create stream\nI0212 21:38:14.721368    2793 log.go:172] (0xc0000fafd0) (0xc00048eb40) Stream added, broadcasting: 1\nI0212 21:38:14.729244    2793 log.go:172] (0xc0000fafd0) Reply frame received for 1\nI0212 21:38:14.729290    2793 log.go:172] (0xc0000fafd0) (0xc0005d9e00) Create stream\nI0212 21:38:14.729298    2793 log.go:172] (0xc0000fafd0) (0xc0005d9e00) Stream added, broadcasting: 3\nI0212 21:38:14.732058    2793 log.go:172] (0xc0000fafd0) Reply frame received for 3\nI0212 21:38:14.732087    2793 log.go:172] (0xc0000fafd0) (0xc0008f4000) Create stream\nI0212 21:38:14.732113    2793 log.go:172] (0xc0000fafd0) (0xc0008f4000) Stream added, broadcasting: 5\nI0212 21:38:14.733141    2793 log.go:172] (0xc0000fafd0) Reply frame received for 5\nI0212 21:38:14.829440    2793 log.go:172] (0xc0000fafd0) Data frame received for 5\nI0212 21:38:14.829472    2793 log.go:172] (0xc0008f4000) (5) Data frame handling\nI0212 21:38:14.829487    2793 log.go:172] (0xc0008f4000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0212 21:38:14.870577    2793 log.go:172] (0xc0000fafd0) Data frame received for 3\nI0212 21:38:14.870632    2793 log.go:172] (0xc0005d9e00) (3) Data frame handling\nI0212 21:38:14.870669    2793 log.go:172] (0xc0005d9e00) (3) Data frame sent\nI0212 21:38:14.947954    2793 log.go:172] (0xc0000fafd0) (0xc0005d9e00) Stream removed, broadcasting: 3\nI0212 21:38:14.948030    2793 log.go:172] (0xc0000fafd0) Data frame received for 1\nI0212 21:38:14.948040    2793 log.go:172] (0xc00048eb40) (1) Data frame handling\nI0212 21:38:14.948050    2793 log.go:172] (0xc00048eb40) (1) Data frame sent\nI0212 21:38:14.948054    2793 log.go:172] (0xc0000fafd0) (0xc00048eb40) Stream removed, broadcasting: 1\nI0212 21:38:14.948516    2793 log.go:172] (0xc0000fafd0) (0xc0008f4000) Stream removed, broadcasting: 5\nI0212 21:38:14.948549    2793 log.go:172] (0xc0000fafd0) (0xc00048eb40) Stream removed, broadcasting: 1\nI0212 21:38:14.948555    2793 log.go:172] (0xc0000fafd0) (0xc0005d9e00) Stream removed, broadcasting: 3\nI0212 21:38:14.948559    2793 log.go:172] (0xc0000fafd0) (0xc0008f4000) Stream removed, broadcasting: 5\n"
Feb 12 21:38:14.953: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 12 21:38:14.953: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb 12 21:38:25.011: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 12 21:38:35.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3447 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 21:38:35.457: INFO: stderr: "I0212 21:38:35.237808    2805 log.go:172] (0xc000978dc0) (0xc000940460) Create stream\nI0212 21:38:35.237907    2805 log.go:172] (0xc000978dc0) (0xc000940460) Stream added, broadcasting: 1\nI0212 21:38:35.249210    2805 log.go:172] (0xc000978dc0) Reply frame received for 1\nI0212 21:38:35.249286    2805 log.go:172] (0xc000978dc0) (0xc00066e6e0) Create stream\nI0212 21:38:35.249301    2805 log.go:172] (0xc000978dc0) (0xc00066e6e0) Stream added, broadcasting: 3\nI0212 21:38:35.250886    2805 log.go:172] (0xc000978dc0) Reply frame received for 3\nI0212 21:38:35.250921    2805 log.go:172] (0xc000978dc0) (0xc00047d360) Create stream\nI0212 21:38:35.250930    2805 log.go:172] (0xc000978dc0) (0xc00047d360) Stream added, broadcasting: 5\nI0212 21:38:35.252332    2805 log.go:172] (0xc000978dc0) Reply frame received for 5\nI0212 21:38:35.353159    2805 log.go:172] (0xc000978dc0) Data frame received for 5\nI0212 21:38:35.353222    2805 log.go:172] (0xc00047d360) (5) Data frame handling\nI0212 21:38:35.353390    2805 log.go:172] (0xc00047d360) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0212 21:38:35.354326    2805 log.go:172] (0xc000978dc0) Data frame received for 3\nI0212 21:38:35.354367    2805 log.go:172] (0xc00066e6e0) (3) Data frame handling\nI0212 21:38:35.354396    2805 log.go:172] (0xc00066e6e0) (3) Data frame sent\nI0212 21:38:35.445554    2805 log.go:172] (0xc000978dc0) (0xc00066e6e0) Stream removed, broadcasting: 3\nI0212 21:38:35.445721    2805 log.go:172] (0xc000978dc0) Data frame received for 1\nI0212 21:38:35.445928    2805 log.go:172] (0xc000978dc0) (0xc00047d360) Stream removed, broadcasting: 5\nI0212 21:38:35.446167    2805 log.go:172] (0xc000940460) (1) Data frame handling\nI0212 21:38:35.446208    2805 log.go:172] (0xc000940460) (1) Data frame sent\nI0212 21:38:35.446230    2805 log.go:172] (0xc000978dc0) (0xc000940460) Stream removed, broadcasting: 1\nI0212 21:38:35.446254    2805 log.go:172] (0xc000978dc0) Go away received\nI0212 21:38:35.447877    2805 log.go:172] (0xc000978dc0) (0xc000940460) Stream removed, broadcasting: 1\nI0212 21:38:35.448005    2805 log.go:172] (0xc000978dc0) (0xc00066e6e0) Stream removed, broadcasting: 3\nI0212 21:38:35.448077    2805 log.go:172] (0xc000978dc0) (0xc00047d360) Stream removed, broadcasting: 5\n"
Feb 12 21:38:35.457: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 12 21:38:35.457: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 12 21:38:45.557: INFO: Waiting for StatefulSet statefulset-3447/ss2 to complete update
Feb 12 21:38:45.557: INFO: Waiting for Pod statefulset-3447/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 12 21:38:45.557: INFO: Waiting for Pod statefulset-3447/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 12 21:38:45.557: INFO: Waiting for Pod statefulset-3447/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 12 21:38:55.565: INFO: Waiting for StatefulSet statefulset-3447/ss2 to complete update
Feb 12 21:38:55.565: INFO: Waiting for Pod statefulset-3447/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 12 21:38:55.565: INFO: Waiting for Pod statefulset-3447/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 12 21:39:05.567: INFO: Waiting for StatefulSet statefulset-3447/ss2 to complete update
Feb 12 21:39:05.567: INFO: Waiting for Pod statefulset-3447/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 12 21:39:15.572: INFO: Waiting for StatefulSet statefulset-3447/ss2 to complete update
Feb 12 21:39:15.572: INFO: Waiting for Pod statefulset-3447/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 12 21:39:25.566: INFO: Waiting for StatefulSet statefulset-3447/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 12 21:39:35.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3447 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 12 21:39:35.975: INFO: stderr: "I0212 21:39:35.762519    2825 log.go:172] (0xc000ac20b0) (0xc000a6c0a0) Create stream\nI0212 21:39:35.762661    2825 log.go:172] (0xc000ac20b0) (0xc000a6c0a0) Stream added, broadcasting: 1\nI0212 21:39:35.766065    2825 log.go:172] (0xc000ac20b0) Reply frame received for 1\nI0212 21:39:35.766093    2825 log.go:172] (0xc000ac20b0) (0xc000a740a0) Create stream\nI0212 21:39:35.766098    2825 log.go:172] (0xc000ac20b0) (0xc000a740a0) Stream added, broadcasting: 3\nI0212 21:39:35.767499    2825 log.go:172] (0xc000ac20b0) Reply frame received for 3\nI0212 21:39:35.767523    2825 log.go:172] (0xc000ac20b0) (0xc0009aa000) Create stream\nI0212 21:39:35.767536    2825 log.go:172] (0xc000ac20b0) (0xc0009aa000) Stream added, broadcasting: 5\nI0212 21:39:35.769054    2825 log.go:172] (0xc000ac20b0) Reply frame received for 5\nI0212 21:39:35.869638    2825 log.go:172] (0xc000ac20b0) Data frame received for 5\nI0212 21:39:35.869709    2825 log.go:172] (0xc0009aa000) (5) Data frame handling\nI0212 21:39:35.869753    2825 log.go:172] (0xc0009aa000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0212 21:39:35.885870    2825 log.go:172] (0xc000ac20b0) Data frame received for 3\nI0212 21:39:35.885883    2825 log.go:172] (0xc000a740a0) (3) Data frame handling\nI0212 21:39:35.885892    2825 log.go:172] (0xc000a740a0) (3) Data frame sent\nI0212 21:39:35.966916    2825 log.go:172] (0xc000ac20b0) Data frame received for 1\nI0212 21:39:35.966972    2825 log.go:172] (0xc000a6c0a0) (1) Data frame handling\nI0212 21:39:35.966995    2825 log.go:172] (0xc000a6c0a0) (1) Data frame sent\nI0212 21:39:35.967027    2825 log.go:172] (0xc000ac20b0) (0xc000a6c0a0) Stream removed, broadcasting: 1\nI0212 21:39:35.967083    2825 log.go:172] (0xc000ac20b0) (0xc000a740a0) Stream removed, broadcasting: 3\nI0212 21:39:35.967175    2825 log.go:172] (0xc000ac20b0) (0xc0009aa000) Stream removed, broadcasting: 5\nI0212 21:39:35.968380    2825 log.go:172] (0xc000ac20b0) (0xc000a6c0a0) Stream removed, broadcasting: 1\nI0212 21:39:35.968415    2825 log.go:172] (0xc000ac20b0) (0xc000a740a0) Stream removed, broadcasting: 3\nI0212 21:39:35.968473    2825 log.go:172] (0xc000ac20b0) Go away received\nI0212 21:39:35.968532    2825 log.go:172] (0xc000ac20b0) (0xc0009aa000) Stream removed, broadcasting: 5\n"
Feb 12 21:39:35.975: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 12 21:39:35.975: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 12 21:39:46.032: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 12 21:39:56.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3447 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 12 21:39:56.405: INFO: stderr: "I0212 21:39:56.254262    2846 log.go:172] (0xc0007daa50) (0xc0007ce000) Create stream\nI0212 21:39:56.254364    2846 log.go:172] (0xc0007daa50) (0xc0007ce000) Stream added, broadcasting: 1\nI0212 21:39:56.257260    2846 log.go:172] (0xc0007daa50) Reply frame received for 1\nI0212 21:39:56.257290    2846 log.go:172] (0xc0007daa50) (0xc0007ce140) Create stream\nI0212 21:39:56.257297    2846 log.go:172] (0xc0007daa50) (0xc0007ce140) Stream added, broadcasting: 3\nI0212 21:39:56.258172    2846 log.go:172] (0xc0007daa50) Reply frame received for 3\nI0212 21:39:56.258214    2846 log.go:172] (0xc0007daa50) (0xc0007dc000) Create stream\nI0212 21:39:56.258222    2846 log.go:172] (0xc0007daa50) (0xc0007dc000) Stream added, broadcasting: 5\nI0212 21:39:56.259300    2846 log.go:172] (0xc0007daa50) Reply frame received for 5\nI0212 21:39:56.330136    2846 log.go:172] (0xc0007daa50) Data frame received for 5\nI0212 21:39:56.330180    2846 log.go:172] (0xc0007dc000) (5) Data frame handling\nI0212 21:39:56.330210    2846 log.go:172] (0xc0007dc000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0212 21:39:56.330440    2846 log.go:172] (0xc0007daa50) Data frame received for 3\nI0212 21:39:56.330450    2846 log.go:172] (0xc0007ce140) (3) Data frame handling\nI0212 21:39:56.330483    2846 log.go:172] (0xc0007ce140) (3) Data frame sent\nI0212 21:39:56.396647    2846 log.go:172] (0xc0007daa50) Data frame received for 1\nI0212 21:39:56.396812    2846 log.go:172] (0xc0007ce000) (1) Data frame handling\nI0212 21:39:56.396876    2846 log.go:172] (0xc0007ce000) (1) Data frame sent\nI0212 21:39:56.397011    2846 log.go:172] (0xc0007daa50) (0xc0007ce000) Stream removed, broadcasting: 1\nI0212 21:39:56.398633    2846 log.go:172] (0xc0007daa50) (0xc0007ce140) Stream removed, broadcasting: 3\nI0212 21:39:56.398751    2846 log.go:172] (0xc0007daa50) (0xc0007dc000) Stream removed, broadcasting: 5\nI0212 21:39:56.398915    2846 log.go:172] (0xc0007daa50) (0xc0007ce000) Stream removed, broadcasting: 1\nI0212 21:39:56.398935    2846 log.go:172] (0xc0007daa50) (0xc0007ce140) Stream removed, broadcasting: 3\nI0212 21:39:56.398952    2846 log.go:172] (0xc0007daa50) (0xc0007dc000) Stream removed, broadcasting: 5\n"
Feb 12 21:39:56.406: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 12 21:39:56.406: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 12 21:40:06.447: INFO: Waiting for StatefulSet statefulset-3447/ss2 to complete update
Feb 12 21:40:06.447: INFO: Waiting for Pod statefulset-3447/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 12 21:40:06.447: INFO: Waiting for Pod statefulset-3447/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 12 21:40:16.460: INFO: Waiting for StatefulSet statefulset-3447/ss2 to complete update
Feb 12 21:40:16.460: INFO: Waiting for Pod statefulset-3447/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 12 21:40:16.460: INFO: Waiting for Pod statefulset-3447/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 12 21:40:26.460: INFO: Waiting for StatefulSet statefulset-3447/ss2 to complete update
Feb 12 21:40:26.460: INFO: Waiting for Pod statefulset-3447/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 12 21:40:36.462: INFO: Waiting for StatefulSet statefulset-3447/ss2 to complete update
Feb 12 21:40:36.462: INFO: Waiting for Pod statefulset-3447/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 12 21:40:46.460: INFO: Waiting for StatefulSet statefulset-3447/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 12 21:40:56.464: INFO: Deleting all statefulset in ns statefulset-3447
Feb 12 21:40:56.469: INFO: Scaling statefulset ss2 to 0
Feb 12 21:41:36.493: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 21:41:36.498: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:41:36.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3447" for this suite.

• [SLOW TEST:242.182 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":277,"completed":184,"skipped":3053,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:41:36.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:229
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1618
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 12 21:41:36.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-8869'
Feb 12 21:41:36.772: INFO: stderr: ""
Feb 12 21:41:36.772: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Feb 12 21:41:46.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-8869 -o json'
Feb 12 21:41:46.972: INFO: stderr: ""
Feb 12 21:41:46.972: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-12T21:41:36Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-8869\",\n        \"resourceVersion\": \"8027207\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-8869/pods/e2e-test-httpd-pod\",\n        \"uid\": \"9deb7131-9f22-430b-9b26-8b6a579a8416\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-jn5lx\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-jn5lx\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-jn5lx\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-12T21:41:36Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-12T21:41:43Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-12T21:41:43Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-12T21:41:36Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://1873dd2232c3c50e3c231324d3f515824858542d61e0a30d150a988f48d99251\",\n                
\"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-12T21:41:43Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.250\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.1\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-12T21:41:36Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 12 21:41:46.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8869'
Feb 12 21:41:47.614: INFO: stderr: ""
Feb 12 21:41:47.614: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1623
Feb 12 21:41:47.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8869'
Feb 12 21:41:54.065: INFO: stderr: ""
Feb 12 21:41:54.065: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:41:54.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8869" for this suite.

• [SLOW TEST:17.485 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1614
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":277,"completed":185,"skipped":3073,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:41:54.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:42:05.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-586" for this suite.

• [SLOW TEST:11.261 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":277,"completed":186,"skipped":3074,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:42:05.337: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 21:42:05.441: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:42:06.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3951" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":277,"completed":187,"skipped":3078,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:42:06.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Feb 12 21:42:15.165: INFO: Successfully updated pod "labelsupdate59b4fcd8-166a-4822-9a3d-16400ff7d044"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:42:17.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8736" for this suite.

• [SLOW TEST:10.832 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":277,"completed":188,"skipped":3081,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:42:17.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-278
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Feb 12 21:42:17.446: INFO: Found 0 stateful pods, waiting for 3
Feb 12 21:42:27.692: INFO: Found 2 stateful pods, waiting for 3
Feb 12 21:42:37.453: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 21:42:37.453: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 21:42:37.453: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 12 21:42:47.452: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 21:42:47.452: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 21:42:47.452: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb 12 21:42:47.477: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 12 21:42:57.558: INFO: Updating stateful set ss2
Feb 12 21:42:57.600: INFO: Waiting for Pod statefulset-278/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 12 21:43:07.612: INFO: Waiting for Pod statefulset-278/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Feb 12 21:43:17.990: INFO: Found 2 stateful pods, waiting for 3
Feb 12 21:43:28.022: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 21:43:28.022: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 21:43:28.022: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 12 21:43:37.997: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 21:43:37.997: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 12 21:43:37.997: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 12 21:43:38.037: INFO: Updating stateful set ss2
Feb 12 21:43:38.082: INFO: Waiting for Pod statefulset-278/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 12 21:43:48.099: INFO: Waiting for Pod statefulset-278/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 12 21:43:58.673: INFO: Updating stateful set ss2
Feb 12 21:43:58.728: INFO: Waiting for StatefulSet statefulset-278/ss2 to complete update
Feb 12 21:43:58.728: INFO: Waiting for Pod statefulset-278/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 12 21:44:08.736: INFO: Waiting for StatefulSet statefulset-278/ss2 to complete update
Feb 12 21:44:08.736: INFO: Waiting for Pod statefulset-278/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 12 21:44:18.756: INFO: Waiting for StatefulSet statefulset-278/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 12 21:44:28.742: INFO: Deleting all statefulset in ns statefulset-278
Feb 12 21:44:28.749: INFO: Scaling statefulset ss2 to 0
Feb 12 21:44:48.792: INFO: Waiting for statefulset status.replicas updated to 0
Feb 12 21:44:48.798: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:44:48.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-278" for this suite.

• [SLOW TEST:151.532 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":277,"completed":189,"skipped":3081,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:44:48.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Feb 12 21:44:48.930: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9b908f98-a20f-405d-8d63-f920d6395892" in namespace "downward-api-9396" to be "Succeeded or Failed"
Feb 12 21:44:48.939: INFO: Pod "downwardapi-volume-9b908f98-a20f-405d-8d63-f920d6395892": Phase="Pending", Reason="", readiness=false. Elapsed: 9.20307ms
Feb 12 21:44:50.955: INFO: Pod "downwardapi-volume-9b908f98-a20f-405d-8d63-f920d6395892": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024577966s
Feb 12 21:44:52.959: INFO: Pod "downwardapi-volume-9b908f98-a20f-405d-8d63-f920d6395892": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029276915s
Feb 12 21:44:54.999: INFO: Pod "downwardapi-volume-9b908f98-a20f-405d-8d63-f920d6395892": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069179712s
Feb 12 21:44:57.008: INFO: Pod "downwardapi-volume-9b908f98-a20f-405d-8d63-f920d6395892": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077903384s
STEP: Saw pod success
Feb 12 21:44:57.008: INFO: Pod "downwardapi-volume-9b908f98-a20f-405d-8d63-f920d6395892" satisfied condition "Succeeded or Failed"
Feb 12 21:44:57.015: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-9b908f98-a20f-405d-8d63-f920d6395892 container client-container: 
STEP: delete the pod
Feb 12 21:44:57.096: INFO: Waiting for pod downwardapi-volume-9b908f98-a20f-405d-8d63-f920d6395892 to disappear
Feb 12 21:44:57.099: INFO: Pod downwardapi-volume-9b908f98-a20f-405d-8d63-f920d6395892 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:44:57.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9396" for this suite.

• [SLOW TEST:8.245 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":190,"skipped":3113,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:44:57.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 12 21:44:57.259: INFO: Waiting up to 5m0s for pod "pod-60550692-1478-4b7f-b695-76ad9434efcf" in namespace "emptydir-3602" to be "Succeeded or Failed"
Feb 12 21:44:57.271: INFO: Pod "pod-60550692-1478-4b7f-b695-76ad9434efcf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.840017ms
Feb 12 21:44:59.277: INFO: Pod "pod-60550692-1478-4b7f-b695-76ad9434efcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018417646s
Feb 12 21:45:01.286: INFO: Pod "pod-60550692-1478-4b7f-b695-76ad9434efcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026886214s
Feb 12 21:45:03.291: INFO: Pod "pod-60550692-1478-4b7f-b695-76ad9434efcf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03240587s
Feb 12 21:45:05.297: INFO: Pod "pod-60550692-1478-4b7f-b695-76ad9434efcf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038549516s
Feb 12 21:45:07.305: INFO: Pod "pod-60550692-1478-4b7f-b695-76ad9434efcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.04640933s
STEP: Saw pod success
Feb 12 21:45:07.305: INFO: Pod "pod-60550692-1478-4b7f-b695-76ad9434efcf" satisfied condition "Succeeded or Failed"
Feb 12 21:45:07.310: INFO: Trying to get logs from node jerma-node pod pod-60550692-1478-4b7f-b695-76ad9434efcf container test-container: 
STEP: delete the pod
Feb 12 21:45:07.712: INFO: Waiting for pod pod-60550692-1478-4b7f-b695-76ad9434efcf to disappear
Feb 12 21:45:07.723: INFO: Pod pod-60550692-1478-4b7f-b695-76ad9434efcf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:45:07.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3602" for this suite.

• [SLOW TEST:10.631 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":191,"skipped":3154,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:45:07.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 12 21:45:07.892: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 12 21:45:07.920: INFO: Waiting for terminating namespaces to be deleted...
Feb 12 21:45:07.923: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 12 21:45:07.933: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 12 21:45:07.933: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 21:45:07.933: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 12 21:45:07.933: INFO: 	Container weave ready: true, restart count 1
Feb 12 21:45:07.933: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 21:45:07.933: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 12 21:45:07.950: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 12 21:45:07.950: INFO: 	Container kube-scheduler ready: true, restart count 9
Feb 12 21:45:07.950: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 12 21:45:07.950: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 12 21:45:07.950: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 12 21:45:07.950: INFO: 	Container etcd ready: true, restart count 1
Feb 12 21:45:07.950: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 12 21:45:07.950: INFO: 	Container coredns ready: true, restart count 0
Feb 12 21:45:07.950: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 12 21:45:07.950: INFO: 	Container coredns ready: true, restart count 0
Feb 12 21:45:07.950: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 12 21:45:07.950: INFO: 	Container kube-controller-manager ready: true, restart count 6
Feb 12 21:45:07.950: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 12 21:45:07.950: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 21:45:07.950: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 12 21:45:07.950: INFO: 	Container weave ready: true, restart count 0
Feb 12 21:45:07.950: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f2c5ce10b7919a], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:45:09.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-274" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":277,"completed":192,"skipped":3185,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:45:09.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Feb 12 21:45:09.263: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5c612fc-c105-488e-9a80-9cd320fb9038" in namespace "projected-8196" to be "Succeeded or Failed"
Feb 12 21:45:09.373: INFO: Pod "downwardapi-volume-d5c612fc-c105-488e-9a80-9cd320fb9038": Phase="Pending", Reason="", readiness=false. Elapsed: 109.43872ms
Feb 12 21:45:11.385: INFO: Pod "downwardapi-volume-d5c612fc-c105-488e-9a80-9cd320fb9038": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121223617s
Feb 12 21:45:13.431: INFO: Pod "downwardapi-volume-d5c612fc-c105-488e-9a80-9cd320fb9038": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168120649s
Feb 12 21:45:15.438: INFO: Pod "downwardapi-volume-d5c612fc-c105-488e-9a80-9cd320fb9038": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.174503659s
STEP: Saw pod success
Feb 12 21:45:15.438: INFO: Pod "downwardapi-volume-d5c612fc-c105-488e-9a80-9cd320fb9038" satisfied condition "Succeeded or Failed"
Feb 12 21:45:15.441: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d5c612fc-c105-488e-9a80-9cd320fb9038 container client-container: 
STEP: delete the pod
Feb 12 21:45:15.483: INFO: Waiting for pod downwardapi-volume-d5c612fc-c105-488e-9a80-9cd320fb9038 to disappear
Feb 12 21:45:15.489: INFO: Pod downwardapi-volume-d5c612fc-c105-488e-9a80-9cd320fb9038 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:45:15.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8196" for this suite.

• [SLOW TEST:6.497 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":277,"completed":193,"skipped":3191,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:45:15.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Feb 12 21:45:23.668: INFO: Pod pod-hostip-88694581-1ae7-46de-b4c4-0ec3bc841a6e has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:45:23.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8646" for this suite.

• [SLOW TEST:8.163 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":277,"completed":194,"skipped":3222,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:45:23.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Feb 12 21:45:30.854: INFO: Successfully updated pod "annotationupdateba5d6d7b-55f5-4b0a-a6db-2c4ddd94be45"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:45:34.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7912" for this suite.

• [SLOW TEST:11.271 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":277,"completed":195,"skipped":3230,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:45:34.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-70c8a366-5de4-423d-a2ec-ecaff582d849
STEP: Creating a pod to test consume secrets
Feb 12 21:45:35.122: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-abbd449d-50dc-4acf-96e1-c3f7a113ddd3" in namespace "projected-2926" to be "Succeeded or Failed"
Feb 12 21:45:35.144: INFO: Pod "pod-projected-secrets-abbd449d-50dc-4acf-96e1-c3f7a113ddd3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.589766ms
Feb 12 21:45:37.153: INFO: Pod "pod-projected-secrets-abbd449d-50dc-4acf-96e1-c3f7a113ddd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030532017s
Feb 12 21:45:39.160: INFO: Pod "pod-projected-secrets-abbd449d-50dc-4acf-96e1-c3f7a113ddd3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038199842s
Feb 12 21:45:41.167: INFO: Pod "pod-projected-secrets-abbd449d-50dc-4acf-96e1-c3f7a113ddd3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044675855s
Feb 12 21:45:43.172: INFO: Pod "pod-projected-secrets-abbd449d-50dc-4acf-96e1-c3f7a113ddd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049538216s
STEP: Saw pod success
Feb 12 21:45:43.172: INFO: Pod "pod-projected-secrets-abbd449d-50dc-4acf-96e1-c3f7a113ddd3" satisfied condition "Succeeded or Failed"
Feb 12 21:45:43.175: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-abbd449d-50dc-4acf-96e1-c3f7a113ddd3 container projected-secret-volume-test: 
STEP: delete the pod
Feb 12 21:45:43.544: INFO: Waiting for pod pod-projected-secrets-abbd449d-50dc-4acf-96e1-c3f7a113ddd3 to disappear
Feb 12 21:45:43.582: INFO: Pod pod-projected-secrets-abbd449d-50dc-4acf-96e1-c3f7a113ddd3 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:45:43.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2926" for this suite.

• [SLOW TEST:8.647 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":196,"skipped":3234,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:45:43.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 12 21:45:43.790: INFO: Waiting up to 5m0s for pod "pod-e6b44eaa-a0fb-42c6-a375-e951c4ffe93f" in namespace "emptydir-6731" to be "Succeeded or Failed"
Feb 12 21:45:43.811: INFO: Pod "pod-e6b44eaa-a0fb-42c6-a375-e951c4ffe93f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.227649ms
Feb 12 21:45:45.819: INFO: Pod "pod-e6b44eaa-a0fb-42c6-a375-e951c4ffe93f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028711729s
Feb 12 21:45:47.834: INFO: Pod "pod-e6b44eaa-a0fb-42c6-a375-e951c4ffe93f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043451436s
Feb 12 21:45:49.844: INFO: Pod "pod-e6b44eaa-a0fb-42c6-a375-e951c4ffe93f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053568738s
Feb 12 21:45:51.848: INFO: Pod "pod-e6b44eaa-a0fb-42c6-a375-e951c4ffe93f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057864349s
Feb 12 21:45:53.862: INFO: Pod "pod-e6b44eaa-a0fb-42c6-a375-e951c4ffe93f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.071551072s
STEP: Saw pod success
Feb 12 21:45:53.862: INFO: Pod "pod-e6b44eaa-a0fb-42c6-a375-e951c4ffe93f" satisfied condition "Succeeded or Failed"
Feb 12 21:45:53.892: INFO: Trying to get logs from node jerma-node pod pod-e6b44eaa-a0fb-42c6-a375-e951c4ffe93f container test-container: 
STEP: delete the pod
Feb 12 21:45:54.017: INFO: Waiting for pod pod-e6b44eaa-a0fb-42c6-a375-e951c4ffe93f to disappear
Feb 12 21:45:54.024: INFO: Pod pod-e6b44eaa-a0fb-42c6-a375-e951c4ffe93f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:45:54.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6731" for this suite.

• [SLOW TEST:10.434 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":197,"skipped":3236,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:45:54.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:46:42.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2467" for this suite.

• [SLOW TEST:48.105 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":277,"completed":198,"skipped":3361,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:46:42.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-223
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 12 21:46:42.283: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 12 21:46:42.448: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 21:46:44.613: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 21:46:46.453: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 21:46:49.506: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 21:46:50.842: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 21:46:52.453: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 12 21:46:54.454: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 12 21:46:56.455: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 12 21:46:58.455: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 12 21:47:00.455: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 12 21:47:02.455: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 12 21:47:04.456: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 12 21:47:06.456: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 12 21:47:06.467: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 12 21:47:08.475: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 12 21:47:10.478: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 12 21:47:12.473: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 12 21:47:20.563: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-223 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 21:47:20.563: INFO: >>> kubeConfig: /root/.kube/config
I0212 21:47:20.634958       9 log.go:172] (0xc0029093f0) (0xc0004461e0) Create stream
I0212 21:47:20.635042       9 log.go:172] (0xc0029093f0) (0xc0004461e0) Stream added, broadcasting: 1
I0212 21:47:20.641428       9 log.go:172] (0xc0029093f0) Reply frame received for 1
I0212 21:47:20.641462       9 log.go:172] (0xc0029093f0) (0xc001ddc500) Create stream
I0212 21:47:20.641471       9 log.go:172] (0xc0029093f0) (0xc001ddc500) Stream added, broadcasting: 3
I0212 21:47:20.643113       9 log.go:172] (0xc0029093f0) Reply frame received for 3
I0212 21:47:20.643136       9 log.go:172] (0xc0029093f0) (0xc001ddc5a0) Create stream
I0212 21:47:20.643146       9 log.go:172] (0xc0029093f0) (0xc001ddc5a0) Stream added, broadcasting: 5
I0212 21:47:20.645016       9 log.go:172] (0xc0029093f0) Reply frame received for 5
I0212 21:47:20.744591       9 log.go:172] (0xc0029093f0) Data frame received for 3
I0212 21:47:20.744623       9 log.go:172] (0xc001ddc500) (3) Data frame handling
I0212 21:47:20.744649       9 log.go:172] (0xc001ddc500) (3) Data frame sent
I0212 21:47:20.825241       9 log.go:172] (0xc0029093f0) Data frame received for 1
I0212 21:47:20.825263       9 log.go:172] (0xc0004461e0) (1) Data frame handling
I0212 21:47:20.825272       9 log.go:172] (0xc0004461e0) (1) Data frame sent
I0212 21:47:20.825454       9 log.go:172] (0xc0029093f0) (0xc0004461e0) Stream removed, broadcasting: 1
I0212 21:47:20.825531       9 log.go:172] (0xc0029093f0) (0xc001ddc500) Stream removed, broadcasting: 3
I0212 21:47:20.825679       9 log.go:172] (0xc0029093f0) (0xc001ddc5a0) Stream removed, broadcasting: 5
I0212 21:47:20.825696       9 log.go:172] (0xc0029093f0) (0xc0004461e0) Stream removed, broadcasting: 1
I0212 21:47:20.825710       9 log.go:172] (0xc0029093f0) (0xc001ddc500) Stream removed, broadcasting: 3
I0212 21:47:20.825723       9 log.go:172] (0xc0029093f0) (0xc001ddc5a0) Stream removed, broadcasting: 5
I0212 21:47:20.825882       9 log.go:172] (0xc0029093f0) Go away received
Feb 12 21:47:20.825: INFO: Waiting for responses: map[]
Feb 12 21:47:20.828: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-223 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 21:47:20.828: INFO: >>> kubeConfig: /root/.kube/config
I0212 21:47:20.894261       9 log.go:172] (0xc002c5c9a0) (0xc00193bb80) Create stream
I0212 21:47:20.894332       9 log.go:172] (0xc002c5c9a0) (0xc00193bb80) Stream added, broadcasting: 1
I0212 21:47:20.898944       9 log.go:172] (0xc002c5c9a0) Reply frame received for 1
I0212 21:47:20.898985       9 log.go:172] (0xc002c5c9a0) (0xc001ddc640) Create stream
I0212 21:47:20.899006       9 log.go:172] (0xc002c5c9a0) (0xc001ddc640) Stream added, broadcasting: 3
I0212 21:47:20.900531       9 log.go:172] (0xc002c5c9a0) Reply frame received for 3
I0212 21:47:20.900548       9 log.go:172] (0xc002c5c9a0) (0xc00114b180) Create stream
I0212 21:47:20.900555       9 log.go:172] (0xc002c5c9a0) (0xc00114b180) Stream added, broadcasting: 5
I0212 21:47:20.901572       9 log.go:172] (0xc002c5c9a0) Reply frame received for 5
I0212 21:47:20.995398       9 log.go:172] (0xc002c5c9a0) Data frame received for 3
I0212 21:47:20.995457       9 log.go:172] (0xc001ddc640) (3) Data frame handling
I0212 21:47:20.995472       9 log.go:172] (0xc001ddc640) (3) Data frame sent
I0212 21:47:21.073109       9 log.go:172] (0xc002c5c9a0) Data frame received for 1
I0212 21:47:21.073206       9 log.go:172] (0xc002c5c9a0) (0xc001ddc640) Stream removed, broadcasting: 3
I0212 21:47:21.073230       9 log.go:172] (0xc00193bb80) (1) Data frame handling
I0212 21:47:21.073244       9 log.go:172] (0xc00193bb80) (1) Data frame sent
I0212 21:47:21.073254       9 log.go:172] (0xc002c5c9a0) (0xc00193bb80) Stream removed, broadcasting: 1
I0212 21:47:21.073498       9 log.go:172] (0xc002c5c9a0) (0xc00114b180) Stream removed, broadcasting: 5
I0212 21:47:21.073524       9 log.go:172] (0xc002c5c9a0) (0xc00193bb80) Stream removed, broadcasting: 1
I0212 21:47:21.073530       9 log.go:172] (0xc002c5c9a0) (0xc001ddc640) Stream removed, broadcasting: 3
I0212 21:47:21.073545       9 log.go:172] (0xc002c5c9a0) (0xc00114b180) Stream removed, broadcasting: 5
Feb 12 21:47:21.073: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:47:21.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0212 21:47:21.074499       9 log.go:172] (0xc002c5c9a0) Go away received
STEP: Destroying namespace "pod-network-test-223" for this suite.

• [SLOW TEST:38.939 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":277,"completed":199,"skipped":3362,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:47:21.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-61639f47-5123-4ec9-aee8-b082883c613c
STEP: Creating a pod to test consume configMaps
Feb 12 21:47:21.359: INFO: Waiting up to 5m0s for pod "pod-configmaps-feabac36-cd0e-45ab-8f26-e51c4ceba97d" in namespace "configmap-2556" to be "Succeeded or Failed"
Feb 12 21:47:21.379: INFO: Pod "pod-configmaps-feabac36-cd0e-45ab-8f26-e51c4ceba97d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.375475ms
Feb 12 21:47:23.632: INFO: Pod "pod-configmaps-feabac36-cd0e-45ab-8f26-e51c4ceba97d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.272186508s
Feb 12 21:47:25.639: INFO: Pod "pod-configmaps-feabac36-cd0e-45ab-8f26-e51c4ceba97d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.279438866s
Feb 12 21:47:27.647: INFO: Pod "pod-configmaps-feabac36-cd0e-45ab-8f26-e51c4ceba97d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.287250965s
Feb 12 21:47:30.010: INFO: Pod "pod-configmaps-feabac36-cd0e-45ab-8f26-e51c4ceba97d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.651038082s
Feb 12 21:47:32.021: INFO: Pod "pod-configmaps-feabac36-cd0e-45ab-8f26-e51c4ceba97d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.661986132s
Feb 12 21:47:34.029: INFO: Pod "pod-configmaps-feabac36-cd0e-45ab-8f26-e51c4ceba97d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.669293994s
STEP: Saw pod success
Feb 12 21:47:34.029: INFO: Pod "pod-configmaps-feabac36-cd0e-45ab-8f26-e51c4ceba97d" satisfied condition "Succeeded or Failed"
Feb 12 21:47:34.034: INFO: Trying to get logs from node jerma-node pod pod-configmaps-feabac36-cd0e-45ab-8f26-e51c4ceba97d container configmap-volume-test: 
STEP: delete the pod
Feb 12 21:47:34.097: INFO: Waiting for pod pod-configmaps-feabac36-cd0e-45ab-8f26-e51c4ceba97d to disappear
Feb 12 21:47:34.101: INFO: Pod pod-configmaps-feabac36-cd0e-45ab-8f26-e51c4ceba97d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:47:34.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2556" for this suite.

• [SLOW TEST:13.025 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":277,"completed":200,"skipped":3376,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:47:34.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-30325740-19da-47b6-9945-7ff102041532
STEP: Creating a pod to test consume secrets
Feb 12 21:47:34.239: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3b9b8fb6-fa16-4905-beb2-04832cd3fe73" in namespace "projected-6466" to be "Succeeded or Failed"
Feb 12 21:47:34.242: INFO: Pod "pod-projected-secrets-3b9b8fb6-fa16-4905-beb2-04832cd3fe73": Phase="Pending", Reason="", readiness=false. Elapsed: 3.330808ms
Feb 12 21:47:36.250: INFO: Pod "pod-projected-secrets-3b9b8fb6-fa16-4905-beb2-04832cd3fe73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011308139s
Feb 12 21:47:38.260: INFO: Pod "pod-projected-secrets-3b9b8fb6-fa16-4905-beb2-04832cd3fe73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021934219s
Feb 12 21:47:40.267: INFO: Pod "pod-projected-secrets-3b9b8fb6-fa16-4905-beb2-04832cd3fe73": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028377341s
Feb 12 21:47:42.273: INFO: Pod "pod-projected-secrets-3b9b8fb6-fa16-4905-beb2-04832cd3fe73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.034673007s
STEP: Saw pod success
Feb 12 21:47:42.273: INFO: Pod "pod-projected-secrets-3b9b8fb6-fa16-4905-beb2-04832cd3fe73" satisfied condition "Succeeded or Failed"
Feb 12 21:47:42.276: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-3b9b8fb6-fa16-4905-beb2-04832cd3fe73 container projected-secret-volume-test: 
STEP: delete the pod
Feb 12 21:47:42.330: INFO: Waiting for pod pod-projected-secrets-3b9b8fb6-fa16-4905-beb2-04832cd3fe73 to disappear
Feb 12 21:47:42.337: INFO: Pod pod-projected-secrets-3b9b8fb6-fa16-4905-beb2-04832cd3fe73 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:47:42.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6466" for this suite.

• [SLOW TEST:8.239 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":277,"completed":201,"skipped":3406,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:47:42.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-e8ff06b8-fe24-4c55-ad80-7190b677982f
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:47:52.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9084" for this suite.

• [SLOW TEST:10.191 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":277,"completed":202,"skipped":3419,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:47:52.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Feb 12 21:47:52.645: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5704" to be "Succeeded or Failed"
Feb 12 21:47:52.655: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.110053ms
Feb 12 21:47:54.660: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015359715s
Feb 12 21:47:56.666: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020825329s
Feb 12 21:47:58.672: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.027203149s
Feb 12 21:48:00.678: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.032988271s
Feb 12 21:48:02.686: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.041392193s
Feb 12 21:48:04.693: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.047825196s
STEP: Saw pod success
Feb 12 21:48:04.693: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Feb 12 21:48:04.697: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 12 21:48:04.789: INFO: Waiting for pod pod-host-path-test to disappear
Feb 12 21:48:04.794: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:48:04.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5704" for this suite.

• [SLOW TEST:12.263 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":203,"skipped":3423,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:48:04.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:692
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-78
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-78 to expose endpoints map[]
Feb 12 21:48:05.005: INFO: Get endpoints failed (10.129874ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 12 21:48:06.009: INFO: successfully validated that service multi-endpoint-test in namespace services-78 exposes endpoints map[] (1.014133399s elapsed)
STEP: Creating pod pod1 in namespace services-78
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-78 to expose endpoints map[pod1:[100]]
Feb 12 21:48:11.262: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.245840564s elapsed, will retry)
Feb 12 21:48:14.852: INFO: successfully validated that service multi-endpoint-test in namespace services-78 exposes endpoints map[pod1:[100]] (8.835948601s elapsed)
STEP: Creating pod pod2 in namespace services-78
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-78 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 12 21:48:19.585: INFO: Unexpected endpoints: found map[19b83c3c-ea84-4f16-834b-553e61dc86c9:[100]], expected map[pod1:[100] pod2:[101]] (4.70557369s elapsed, will retry)
Feb 12 21:48:21.682: INFO: successfully validated that service multi-endpoint-test in namespace services-78 exposes endpoints map[pod1:[100] pod2:[101]] (6.80307176s elapsed)
STEP: Deleting pod pod1 in namespace services-78
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-78 to expose endpoints map[pod2:[101]]
Feb 12 21:48:21.723: INFO: successfully validated that service multi-endpoint-test in namespace services-78 exposes endpoints map[pod2:[101]] (35.712527ms elapsed)
STEP: Deleting pod pod2 in namespace services-78
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-78 to expose endpoints map[]
Feb 12 21:48:22.812: INFO: successfully validated that service multi-endpoint-test in namespace services-78 exposes endpoints map[] (1.07760233s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:48:23.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-78" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:696

• [SLOW TEST:18.219 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":277,"completed":204,"skipped":3433,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:48:23.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-78eac756-c318-49d9-b082-57eba5b1b320
STEP: Creating a pod to test consume configMaps
Feb 12 21:48:23.355: INFO: Waiting up to 5m0s for pod "pod-configmaps-930805b5-9d23-4c38-9af8-4edc58372261" in namespace "configmap-7445" to be "Succeeded or Failed"
Feb 12 21:48:23.382: INFO: Pod "pod-configmaps-930805b5-9d23-4c38-9af8-4edc58372261": Phase="Pending", Reason="", readiness=false. Elapsed: 27.519818ms
Feb 12 21:48:25.391: INFO: Pod "pod-configmaps-930805b5-9d23-4c38-9af8-4edc58372261": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036547898s
Feb 12 21:48:28.550: INFO: Pod "pod-configmaps-930805b5-9d23-4c38-9af8-4edc58372261": Phase="Pending", Reason="", readiness=false. Elapsed: 5.195561839s
Feb 12 21:48:30.596: INFO: Pod "pod-configmaps-930805b5-9d23-4c38-9af8-4edc58372261": Phase="Pending", Reason="", readiness=false. Elapsed: 7.241331681s
Feb 12 21:48:32.602: INFO: Pod "pod-configmaps-930805b5-9d23-4c38-9af8-4edc58372261": Phase="Pending", Reason="", readiness=false. Elapsed: 9.247292249s
Feb 12 21:48:34.612: INFO: Pod "pod-configmaps-930805b5-9d23-4c38-9af8-4edc58372261": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.257740286s
STEP: Saw pod success
Feb 12 21:48:34.613: INFO: Pod "pod-configmaps-930805b5-9d23-4c38-9af8-4edc58372261" satisfied condition "Succeeded or Failed"
Feb 12 21:48:34.616: INFO: Trying to get logs from node jerma-node pod pod-configmaps-930805b5-9d23-4c38-9af8-4edc58372261 container configmap-volume-test: 
STEP: delete the pod
Feb 12 21:48:34.881: INFO: Waiting for pod pod-configmaps-930805b5-9d23-4c38-9af8-4edc58372261 to disappear
Feb 12 21:48:34.884: INFO: Pod pod-configmaps-930805b5-9d23-4c38-9af8-4edc58372261 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:48:34.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7445" for this suite.

• [SLOW TEST:11.869 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":205,"skipped":3442,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:48:34.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-1827e6b6-c2d1-421a-8203-d30fc3d66aef in namespace container-probe-1222
Feb 12 21:48:43.344: INFO: Started pod test-webserver-1827e6b6-c2d1-421a-8203-d30fc3d66aef in namespace container-probe-1222
STEP: checking the pod's current state and verifying that restartCount is present
Feb 12 21:48:43.349: INFO: Initial restart count of pod test-webserver-1827e6b6-c2d1-421a-8203-d30fc3d66aef is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:52:44.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1222" for this suite.

• [SLOW TEST:249.823 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":277,"completed":206,"skipped":3447,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:52:44.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:692
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:52:44.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8014" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:696
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":277,"completed":207,"skipped":3453,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:52:44.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 21:52:44.970: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:52:46.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-587" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":277,"completed":208,"skipped":3472,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:52:46.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:229
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Feb 12 21:52:46.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 12 21:52:47.107: INFO: stderr: ""
Feb 12 21:52:47.107: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:52:47.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-242" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":277,"completed":209,"skipped":3483,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:52:47.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 12 21:52:47.240: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 12 21:52:47.703: INFO: Waiting for terminating namespaces to be deleted...
Feb 12 21:52:47.708: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 12 21:52:47.732: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 12 21:52:47.732: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 21:52:47.732: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 12 21:52:47.732: INFO: 	Container weave ready: true, restart count 1
Feb 12 21:52:47.732: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 21:52:47.732: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 12 21:52:47.773: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 12 21:52:47.773: INFO: 	Container coredns ready: true, restart count 0
Feb 12 21:52:47.773: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 12 21:52:47.773: INFO: 	Container coredns ready: true, restart count 0
Feb 12 21:52:47.773: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 12 21:52:47.773: INFO: 	Container kube-controller-manager ready: true, restart count 6
Feb 12 21:52:47.773: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 12 21:52:47.773: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 21:52:47.773: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 12 21:52:47.773: INFO: 	Container weave ready: true, restart count 0
Feb 12 21:52:47.773: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 21:52:47.773: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 12 21:52:47.773: INFO: 	Container kube-scheduler ready: true, restart count 9
Feb 12 21:52:47.773: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 12 21:52:47.773: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 12 21:52:47.773: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 12 21:52:47.773: INFO: 	Container etcd ready: true, restart count 1
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-e2cd8b9f-0eee-424d-98a9-bcf452ad0bfb 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-e2cd8b9f-0eee-424d-98a9-bcf452ad0bfb off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-e2cd8b9f-0eee-424d-98a9-bcf452ad0bfb
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:53:21.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3222" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:36.544 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":277,"completed":210,"skipped":3500,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:53:23.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:229
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 21:53:24.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3971'
Feb 12 21:53:26.637: INFO: stderr: ""
Feb 12 21:53:26.638: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Feb 12 21:53:26.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3971'
Feb 12 21:53:27.177: INFO: stderr: ""
Feb 12 21:53:27.177: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 12 21:53:28.185: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 12 21:53:28.185: INFO: Found 0 / 1
Feb 12 21:53:29.185: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 12 21:53:29.185: INFO: Found 0 / 1
Feb 12 21:53:30.182: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 12 21:53:30.182: INFO: Found 0 / 1
Feb 12 21:53:31.181: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 12 21:53:31.181: INFO: Found 0 / 1
Feb 12 21:53:32.189: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 12 21:53:32.189: INFO: Found 0 / 1
Feb 12 21:53:33.183: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 12 21:53:33.183: INFO: Found 0 / 1
Feb 12 21:53:34.284: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 12 21:53:34.284: INFO: Found 0 / 1
Feb 12 21:53:35.181: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 12 21:53:35.181: INFO: Found 0 / 1
Feb 12 21:53:36.182: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 12 21:53:36.182: INFO: Found 0 / 1
Feb 12 21:53:37.183: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 12 21:53:37.183: INFO: Found 0 / 1
Feb 12 21:53:38.189: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 12 21:53:38.189: INFO: Found 1 / 1
Feb 12 21:53:38.189: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 12 21:53:38.195: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 12 21:53:38.195: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 12 21:53:38.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-8q6cl --namespace=kubectl-3971'
Feb 12 21:53:38.379: INFO: stderr: ""
Feb 12 21:53:38.379: INFO: stdout: "Name:         agnhost-master-8q6cl\nNamespace:    kubectl-3971\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Wed, 12 Feb 2020 21:53:26 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.44.0.4\nIPs:\n  IP:           10.44.0.4\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://c7bd47391fead447d8537e334e81734c30af6634122b12ae2c349874d38fb05b\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 12 Feb 2020 21:53:36 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ckw7v (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-ckw7v:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-ckw7v\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled    default-scheduler    Successfully assigned kubectl-3971/agnhost-master-8q6cl to jerma-node\n  Normal  Pulled     8s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    4s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    2s         kubelet, jerma-node  Started container agnhost-master\n"
Feb 12 21:53:38.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-3971'
Feb 12 21:53:38.591: INFO: stderr: ""
Feb 12 21:53:38.591: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-3971\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  12s   replication-controller  Created pod: agnhost-master-8q6cl\n"
Feb 12 21:53:38.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-3971'
Feb 12 21:53:38.781: INFO: stderr: ""
Feb 12 21:53:38.781: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-3971\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.56.95\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.4:6379\nSession Affinity:  None\nEvents:            \n"
Feb 12 21:53:38.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Feb 12 21:53:38.938: INFO: stderr: ""
Feb 12 21:53:38.938: INFO: stdout: "Name:               jerma-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             \nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     \n  RenewTime:       Wed, 12 Feb 2020 21:53:31 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Wed, 12 Feb 2020 21:49:54 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Wed, 12 Feb 2020 21:49:54 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Wed, 12 Feb 2020 21:49:54 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Wed, 12 Feb 2020 21:49:54 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (4 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39d\n  kube-system                 weave-net-kz8lv         20m (0%)      0 (0%)      0 (0%)           0 (0%)         39d\n  kubectl-3971                agnhost-master-8q6cl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s\n  sched-pred-3222             pod1                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Feb 12 21:53:38.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3971'
Feb 12 21:53:39.072: INFO: stderr: ""
Feb 12 21:53:39.072: INFO: stdout: "Name:         kubectl-3971\nLabels:       e2e-framework=kubectl\n              e2e-run=94c6791d-b4b4-49b4-91fb-bb25701d34ed\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:53:39.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3971" for this suite.

• [SLOW TEST:15.383 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1106
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":277,"completed":211,"skipped":3504,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:53:39.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:53:55.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8579" for this suite.

• [SLOW TEST:16.785 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":277,"completed":212,"skipped":3525,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:53:55.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 12 21:54:04.057: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:54:04.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8345" for this suite.

• [SLOW TEST:8.283 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":277,"completed":213,"skipped":3555,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:54:04.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-c8f7ee72-19c0-4471-b173-d886dd32e4b6
STEP: Creating a pod to test consume configMaps
Feb 12 21:54:04.232: INFO: Waiting up to 5m0s for pod "pod-configmaps-01699569-8478-4ccc-a170-c3a1a1128ea1" in namespace "configmap-7254" to be "Succeeded or Failed"
Feb 12 21:54:04.240: INFO: Pod "pod-configmaps-01699569-8478-4ccc-a170-c3a1a1128ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.27461ms
Feb 12 21:54:06.246: INFO: Pod "pod-configmaps-01699569-8478-4ccc-a170-c3a1a1128ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01413336s
Feb 12 21:54:08.252: INFO: Pod "pod-configmaps-01699569-8478-4ccc-a170-c3a1a1128ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02012623s
Feb 12 21:54:10.258: INFO: Pod "pod-configmaps-01699569-8478-4ccc-a170-c3a1a1128ea1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025910643s
Feb 12 21:54:12.263: INFO: Pod "pod-configmaps-01699569-8478-4ccc-a170-c3a1a1128ea1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.030644658s
STEP: Saw pod success
Feb 12 21:54:12.263: INFO: Pod "pod-configmaps-01699569-8478-4ccc-a170-c3a1a1128ea1" satisfied condition "Succeeded or Failed"
Feb 12 21:54:12.265: INFO: Trying to get logs from node jerma-node pod pod-configmaps-01699569-8478-4ccc-a170-c3a1a1128ea1 container configmap-volume-test: 
STEP: delete the pod
Feb 12 21:54:12.302: INFO: Waiting for pod pod-configmaps-01699569-8478-4ccc-a170-c3a1a1128ea1 to disappear
Feb 12 21:54:12.314: INFO: Pod pod-configmaps-01699569-8478-4ccc-a170-c3a1a1128ea1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:54:12.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7254" for this suite.

• [SLOW TEST:8.200 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":214,"skipped":3578,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:54:12.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-cdd792f6-1e91-430a-84b2-e77458192586
STEP: Creating a pod to test consume configMaps
Feb 12 21:54:12.418: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a3db07bf-839e-48f1-a905-b872c2ed527b" in namespace "projected-8087" to be "Succeeded or Failed"
Feb 12 21:54:12.435: INFO: Pod "pod-projected-configmaps-a3db07bf-839e-48f1-a905-b872c2ed527b": Phase="Pending", Reason="", readiness=false. Elapsed: 17.160176ms
Feb 12 21:54:14.441: INFO: Pod "pod-projected-configmaps-a3db07bf-839e-48f1-a905-b872c2ed527b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023149596s
Feb 12 21:54:16.447: INFO: Pod "pod-projected-configmaps-a3db07bf-839e-48f1-a905-b872c2ed527b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028775028s
Feb 12 21:54:18.457: INFO: Pod "pod-projected-configmaps-a3db07bf-839e-48f1-a905-b872c2ed527b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038613126s
Feb 12 21:54:20.525: INFO: Pod "pod-projected-configmaps-a3db07bf-839e-48f1-a905-b872c2ed527b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106939035s
STEP: Saw pod success
Feb 12 21:54:20.525: INFO: Pod "pod-projected-configmaps-a3db07bf-839e-48f1-a905-b872c2ed527b" satisfied condition "Succeeded or Failed"
Feb 12 21:54:20.537: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-a3db07bf-839e-48f1-a905-b872c2ed527b container projected-configmap-volume-test: 
STEP: delete the pod
Feb 12 21:54:20.812: INFO: Waiting for pod pod-projected-configmaps-a3db07bf-839e-48f1-a905-b872c2ed527b to disappear
Feb 12 21:54:21.057: INFO: Pod pod-projected-configmaps-a3db07bf-839e-48f1-a905-b872c2ed527b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:54:21.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8087" for this suite.

• [SLOW TEST:8.731 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":277,"completed":215,"skipped":3625,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:54:21.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-4de217e9-6d34-448a-b89b-5b0f6f3919eb
STEP: Creating a pod to test consume secrets
Feb 12 21:54:21.211: INFO: Waiting up to 5m0s for pod "pod-secrets-6f5e5085-d8af-4203-bfe0-974517ec94a7" in namespace "secrets-9549" to be "Succeeded or Failed"
Feb 12 21:54:21.227: INFO: Pod "pod-secrets-6f5e5085-d8af-4203-bfe0-974517ec94a7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.56004ms
Feb 12 21:54:23.237: INFO: Pod "pod-secrets-6f5e5085-d8af-4203-bfe0-974517ec94a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026243757s
Feb 12 21:54:25.244: INFO: Pod "pod-secrets-6f5e5085-d8af-4203-bfe0-974517ec94a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033460871s
Feb 12 21:54:27.251: INFO: Pod "pod-secrets-6f5e5085-d8af-4203-bfe0-974517ec94a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04012975s
Feb 12 21:54:29.258: INFO: Pod "pod-secrets-6f5e5085-d8af-4203-bfe0-974517ec94a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047121884s
STEP: Saw pod success
Feb 12 21:54:29.258: INFO: Pod "pod-secrets-6f5e5085-d8af-4203-bfe0-974517ec94a7" satisfied condition "Succeeded or Failed"
Feb 12 21:54:29.263: INFO: Trying to get logs from node jerma-node pod pod-secrets-6f5e5085-d8af-4203-bfe0-974517ec94a7 container secret-volume-test: 
STEP: delete the pod
Feb 12 21:54:29.373: INFO: Waiting for pod pod-secrets-6f5e5085-d8af-4203-bfe0-974517ec94a7 to disappear
Feb 12 21:54:29.435: INFO: Pod pod-secrets-6f5e5085-d8af-4203-bfe0-974517ec94a7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:54:29.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9549" for this suite.

• [SLOW TEST:8.374 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":277,"completed":216,"skipped":3630,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:54:29.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-e4fe1089-8d00-42fb-bfbe-9c26c8fdef04
STEP: Creating a pod to test consume secrets
Feb 12 21:54:29.733: INFO: Waiting up to 5m0s for pod "pod-secrets-b8e311ae-e85d-4f24-8fe3-ecf1b8d615b3" in namespace "secrets-9647" to be "Succeeded or Failed"
Feb 12 21:54:29.750: INFO: Pod "pod-secrets-b8e311ae-e85d-4f24-8fe3-ecf1b8d615b3": Phase="Pending", Reason="", readiness=false. Elapsed: 17.231642ms
Feb 12 21:54:31.756: INFO: Pod "pod-secrets-b8e311ae-e85d-4f24-8fe3-ecf1b8d615b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023156837s
Feb 12 21:54:33.762: INFO: Pod "pod-secrets-b8e311ae-e85d-4f24-8fe3-ecf1b8d615b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029309086s
Feb 12 21:54:35.769: INFO: Pod "pod-secrets-b8e311ae-e85d-4f24-8fe3-ecf1b8d615b3": Phase="Running", Reason="", readiness=true. Elapsed: 6.035886485s
Feb 12 21:54:37.800: INFO: Pod "pod-secrets-b8e311ae-e85d-4f24-8fe3-ecf1b8d615b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06674316s
STEP: Saw pod success
Feb 12 21:54:37.800: INFO: Pod "pod-secrets-b8e311ae-e85d-4f24-8fe3-ecf1b8d615b3" satisfied condition "Succeeded or Failed"
Feb 12 21:54:37.805: INFO: Trying to get logs from node jerma-node pod pod-secrets-b8e311ae-e85d-4f24-8fe3-ecf1b8d615b3 container secret-volume-test: 
STEP: delete the pod
Feb 12 21:54:38.097: INFO: Waiting for pod pod-secrets-b8e311ae-e85d-4f24-8fe3-ecf1b8d615b3 to disappear
Feb 12 21:54:38.115: INFO: Pod pod-secrets-b8e311ae-e85d-4f24-8fe3-ecf1b8d615b3 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:54:38.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9647" for this suite.

• [SLOW TEST:8.669 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":277,"completed":217,"skipped":3652,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:54:38.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:54:42.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6933" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":277,"completed":218,"skipped":3724,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:54:42.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-15a54ace-f50c-4052-a5ca-72c27d0c8f6d
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:54:43.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4890" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":277,"completed":219,"skipped":3726,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:54:43.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0212 21:54:44.197699       9 metrics_grabber.go:80] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 21:54:44.197: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:54:44.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1945" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":277,"completed":220,"skipped":3747,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:54:44.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 12 21:54:54.642: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:54:54.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7690" for this suite.

• [SLOW TEST:10.609 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":277,"completed":221,"skipped":3781,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:54:54.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-d79a97a8-6c16-4d9b-ac33-1e674d9abff8
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-d79a97a8-6c16-4d9b-ac33-1e674d9abff8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:56:10.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3248" for this suite.

• [SLOW TEST:75.945 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":277,"completed":222,"skipped":3801,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:56:10.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-e8095758-52a9-4799-b300-896c0a376457
STEP: Creating a pod to test consume secrets
Feb 12 21:56:11.014: INFO: Waiting up to 5m0s for pod "pod-secrets-d6e1f05c-17cd-42a3-ac85-0e3cd17f076f" in namespace "secrets-2296" to be "Succeeded or Failed"
Feb 12 21:56:11.023: INFO: Pod "pod-secrets-d6e1f05c-17cd-42a3-ac85-0e3cd17f076f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.176081ms
Feb 12 21:56:13.039: INFO: Pod "pod-secrets-d6e1f05c-17cd-42a3-ac85-0e3cd17f076f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024080662s
Feb 12 21:56:15.046: INFO: Pod "pod-secrets-d6e1f05c-17cd-42a3-ac85-0e3cd17f076f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031466889s
Feb 12 21:56:17.054: INFO: Pod "pod-secrets-d6e1f05c-17cd-42a3-ac85-0e3cd17f076f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039818426s
Feb 12 21:56:19.067: INFO: Pod "pod-secrets-d6e1f05c-17cd-42a3-ac85-0e3cd17f076f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052643995s
Feb 12 21:56:21.082: INFO: Pod "pod-secrets-d6e1f05c-17cd-42a3-ac85-0e3cd17f076f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06717083s
STEP: Saw pod success
Feb 12 21:56:21.082: INFO: Pod "pod-secrets-d6e1f05c-17cd-42a3-ac85-0e3cd17f076f" satisfied condition "Succeeded or Failed"
Feb 12 21:56:21.087: INFO: Trying to get logs from node jerma-node pod pod-secrets-d6e1f05c-17cd-42a3-ac85-0e3cd17f076f container secret-env-test: 
STEP: delete the pod
Feb 12 21:56:21.180: INFO: Waiting for pod pod-secrets-d6e1f05c-17cd-42a3-ac85-0e3cd17f076f to disappear
Feb 12 21:56:21.188: INFO: Pod pod-secrets-d6e1f05c-17cd-42a3-ac85-0e3cd17f076f no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:56:21.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2296" for this suite.

• [SLOW TEST:10.435 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":277,"completed":223,"skipped":3808,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:56:21.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 12 21:56:21.340: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4283 /api/v1/namespaces/watch-4283/configmaps/e2e-watch-test-watch-closed 273d4db5-8d8c-4c41-9792-44dc5de813be 8030626 0 2020-02-12 21:56:21 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 12 21:56:21.340: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4283 /api/v1/namespaces/watch-4283/configmaps/e2e-watch-test-watch-closed 273d4db5-8d8c-4c41-9792-44dc5de813be 8030627 0 2020-02-12 21:56:21 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 12 21:56:21.369: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4283 /api/v1/namespaces/watch-4283/configmaps/e2e-watch-test-watch-closed 273d4db5-8d8c-4c41-9792-44dc5de813be 8030629 0 2020-02-12 21:56:21 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 12 21:56:21.370: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4283 /api/v1/namespaces/watch-4283/configmaps/e2e-watch-test-watch-closed 273d4db5-8d8c-4c41-9792-44dc5de813be 8030630 0 2020-02-12 21:56:21 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:56:21.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4283" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":277,"completed":224,"skipped":3825,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:56:21.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:56:38.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7362" for this suite.

• [SLOW TEST:17.350 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":277,"completed":225,"skipped":3826,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:56:38.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-a80ddebf-0f8d-444f-84f8-b87940db4c3a in namespace container-probe-6969
Feb 12 21:56:48.967: INFO: Started pod liveness-a80ddebf-0f8d-444f-84f8-b87940db4c3a in namespace container-probe-6969
STEP: checking the pod's current state and verifying that restartCount is present
Feb 12 21:56:48.970: INFO: Initial restart count of pod liveness-a80ddebf-0f8d-444f-84f8-b87940db4c3a is 0
Feb 12 21:57:13.067: INFO: Restart count of pod container-probe-6969/liveness-a80ddebf-0f8d-444f-84f8-b87940db4c3a is now 1 (24.096582604s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:57:13.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6969" for this suite.

• [SLOW TEST:34.360 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":277,"completed":226,"skipped":3827,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:57:13.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 12 21:57:14.783: INFO: Pod name wrapped-volume-race-1808d7eb-4ebb-45e7-b592-660b7d0176ae: Found 0 pods out of 5
Feb 12 21:57:19.812: INFO: Pod name wrapped-volume-race-1808d7eb-4ebb-45e7-b592-660b7d0176ae: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-1808d7eb-4ebb-45e7-b592-660b7d0176ae in namespace emptydir-wrapper-2461, will wait for the garbage collector to delete the pods
Feb 12 21:57:43.916: INFO: Deleting ReplicationController wrapped-volume-race-1808d7eb-4ebb-45e7-b592-660b7d0176ae took: 16.29196ms
Feb 12 21:57:44.417: INFO: Terminating ReplicationController wrapped-volume-race-1808d7eb-4ebb-45e7-b592-660b7d0176ae pods took: 500.698706ms
STEP: Creating RC which spawns configmap-volume pods
Feb 12 21:57:57.869: INFO: Pod name wrapped-volume-race-e7fa2f4a-78ff-4c62-9e03-3edf5b6fe143: Found 0 pods out of 5
Feb 12 21:58:02.905: INFO: Pod name wrapped-volume-race-e7fa2f4a-78ff-4c62-9e03-3edf5b6fe143: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-e7fa2f4a-78ff-4c62-9e03-3edf5b6fe143 in namespace emptydir-wrapper-2461, will wait for the garbage collector to delete the pods
Feb 12 21:58:35.066: INFO: Deleting ReplicationController wrapped-volume-race-e7fa2f4a-78ff-4c62-9e03-3edf5b6fe143 took: 55.121722ms
Feb 12 21:58:35.566: INFO: Terminating ReplicationController wrapped-volume-race-e7fa2f4a-78ff-4c62-9e03-3edf5b6fe143 pods took: 500.792338ms
STEP: Creating RC which spawns configmap-volume pods
Feb 12 21:58:52.509: INFO: Pod name wrapped-volume-race-ae9a733f-c5bb-4a49-b7ec-0a6fcfd80d02: Found 0 pods out of 5
Feb 12 21:58:57.520: INFO: Pod name wrapped-volume-race-ae9a733f-c5bb-4a49-b7ec-0a6fcfd80d02: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-ae9a733f-c5bb-4a49-b7ec-0a6fcfd80d02 in namespace emptydir-wrapper-2461, will wait for the garbage collector to delete the pods
Feb 12 21:59:25.679: INFO: Deleting ReplicationController wrapped-volume-race-ae9a733f-c5bb-4a49-b7ec-0a6fcfd80d02 took: 20.355373ms
Feb 12 21:59:26.079: INFO: Terminating ReplicationController wrapped-volume-race-ae9a733f-c5bb-4a49-b7ec-0a6fcfd80d02 pods took: 400.343667ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:59:44.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2461" for this suite.

• [SLOW TEST:151.702 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":277,"completed":227,"skipped":3828,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:59:44.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 21:59:44.890: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 21:59:45.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2243" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":277,"completed":228,"skipped":3850,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 21:59:45.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:229
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:281
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Feb 12 21:59:45.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3848'
Feb 12 21:59:46.365: INFO: stderr: ""
Feb 12 21:59:46.365: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 12 21:59:46.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3848'
Feb 12 21:59:46.687: INFO: stderr: ""
Feb 12 21:59:46.687: INFO: stdout: "update-demo-nautilus-57b66 update-demo-nautilus-gn2n2 "
Feb 12 21:59:46.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-57b66 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3848'
Feb 12 21:59:46.765: INFO: stderr: ""
Feb 12 21:59:46.765: INFO: stdout: ""
Feb 12 21:59:46.765: INFO: update-demo-nautilus-57b66 is created but not running
Feb 12 21:59:51.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3848'
Feb 12 21:59:55.826: INFO: stderr: ""
Feb 12 21:59:55.826: INFO: stdout: "update-demo-nautilus-57b66 update-demo-nautilus-gn2n2 "
Feb 12 21:59:55.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-57b66 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3848'
Feb 12 21:59:56.953: INFO: stderr: ""
Feb 12 21:59:56.953: INFO: stdout: ""
Feb 12 21:59:56.953: INFO: update-demo-nautilus-57b66 is created but not running
Feb 12 22:00:01.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3848'
Feb 12 22:00:02.261: INFO: stderr: ""
Feb 12 22:00:02.261: INFO: stdout: "update-demo-nautilus-57b66 update-demo-nautilus-gn2n2 "
Feb 12 22:00:02.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-57b66 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3848'
Feb 12 22:00:02.413: INFO: stderr: ""
Feb 12 22:00:02.413: INFO: stdout: "true"
Feb 12 22:00:02.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-57b66 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3848'
Feb 12 22:00:02.600: INFO: stderr: ""
Feb 12 22:00:02.600: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 22:00:02.601: INFO: validating pod update-demo-nautilus-57b66
Feb 12 22:00:02.617: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 22:00:02.617: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 12 22:00:02.617: INFO: update-demo-nautilus-57b66 is verified up and running
Feb 12 22:00:02.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn2n2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3848'
Feb 12 22:00:02.694: INFO: stderr: ""
Feb 12 22:00:02.694: INFO: stdout: ""
Feb 12 22:00:02.694: INFO: update-demo-nautilus-gn2n2 is created but not running
Feb 12 22:00:07.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3848'
Feb 12 22:00:07.854: INFO: stderr: ""
Feb 12 22:00:07.854: INFO: stdout: "update-demo-nautilus-57b66 update-demo-nautilus-gn2n2 "
Feb 12 22:00:07.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-57b66 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3848'
Feb 12 22:00:07.993: INFO: stderr: ""
Feb 12 22:00:07.993: INFO: stdout: "true"
Feb 12 22:00:07.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-57b66 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3848'
Feb 12 22:00:08.112: INFO: stderr: ""
Feb 12 22:00:08.112: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 22:00:08.112: INFO: validating pod update-demo-nautilus-57b66
Feb 12 22:00:08.117: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 22:00:08.117: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 12 22:00:08.117: INFO: update-demo-nautilus-57b66 is verified up and running
Feb 12 22:00:08.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn2n2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3848'
Feb 12 22:00:08.199: INFO: stderr: ""
Feb 12 22:00:08.199: INFO: stdout: "true"
Feb 12 22:00:08.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gn2n2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3848'
Feb 12 22:00:08.315: INFO: stderr: ""
Feb 12 22:00:08.315: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 22:00:08.315: INFO: validating pod update-demo-nautilus-gn2n2
Feb 12 22:00:08.337: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 22:00:08.337: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 12 22:00:08.337: INFO: update-demo-nautilus-gn2n2 is verified up and running
STEP: using delete to clean up resources
Feb 12 22:00:08.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3848'
Feb 12 22:00:08.449: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 22:00:08.449: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 12 22:00:08.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3848'
Feb 12 22:00:08.550: INFO: stderr: "No resources found in kubectl-3848 namespace.\n"
Feb 12 22:00:08.550: INFO: stdout: ""
Feb 12 22:00:08.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3848 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 12 22:00:08.620: INFO: stderr: ""
Feb 12 22:00:08.621: INFO: stdout: "update-demo-nautilus-57b66\nupdate-demo-nautilus-gn2n2\n"
Feb 12 22:00:09.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3848'
Feb 12 22:00:09.282: INFO: stderr: "No resources found in kubectl-3848 namespace.\n"
Feb 12 22:00:09.282: INFO: stdout: ""
Feb 12 22:00:09.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3848 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 12 22:00:09.368: INFO: stderr: ""
Feb 12 22:00:09.368: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:00:09.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3848" for this suite.

• [SLOW TEST:23.692 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":277,"completed":229,"skipped":3853,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:00:09.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 22:00:10.721: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 12 22:00:10.757: INFO: Number of nodes with available pods: 0
Feb 12 22:00:10.757: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:00:11.774: INFO: Number of nodes with available pods: 0
Feb 12 22:00:11.774: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:00:13.577: INFO: Number of nodes with available pods: 0
Feb 12 22:00:13.577: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:00:13.784: INFO: Number of nodes with available pods: 0
Feb 12 22:00:13.785: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:00:14.800: INFO: Number of nodes with available pods: 0
Feb 12 22:00:14.800: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:00:15.825: INFO: Number of nodes with available pods: 0
Feb 12 22:00:15.825: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:00:17.933: INFO: Number of nodes with available pods: 0
Feb 12 22:00:17.933: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:00:19.378: INFO: Number of nodes with available pods: 0
Feb 12 22:00:19.378: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:00:19.773: INFO: Number of nodes with available pods: 0
Feb 12 22:00:19.773: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:00:20.769: INFO: Number of nodes with available pods: 1
Feb 12 22:00:20.769: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 12 22:00:21.770: INFO: Number of nodes with available pods: 1
Feb 12 22:00:21.771: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Feb 12 22:00:22.779: INFO: Number of nodes with available pods: 2
Feb 12 22:00:22.779: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 12 22:00:22.869: INFO: Wrong image for pod: daemon-set-bqjgz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 12 22:00:22.869: INFO: Wrong image for pod: daemon-set-xhmdn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
[identical wrong-image poll lines repeated roughly once per second, 22:00:23-22:00:31; daemon-set-bqjgz additionally reported "is not available" from 22:00:26]
Feb 12 22:00:32.912: INFO: Pod daemon-set-c4hfw is not available
[replacement pod daemon-set-c4hfw polled as "is not available" through 22:00:38 while daemon-set-xhmdn still showed the old image; daemon-set-xhmdn reported "is not available" from 22:00:43 through 22:00:52]
Feb 12 22:00:54.768: INFO: Pod daemon-set-7b9c9 is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 12 22:00:55.184: INFO: Number of nodes with available pods: 1
Feb 12 22:00:55.184: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
[the same pair of poll lines repeated roughly once per second through 22:01:01]
Feb 12 22:01:02.192: INFO: Number of nodes with available pods: 2
Feb 12 22:01:02.192: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2405, will wait for the garbage collector to delete the pods
Feb 12 22:01:02.266: INFO: Deleting DaemonSet.extensions daemon-set took: 5.64468ms
Feb 12 22:01:02.666: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.320716ms
Feb 12 22:01:12.373: INFO: Number of nodes with available pods: 0
Feb 12 22:01:12.373: INFO: Number of running nodes: 0, number of available pods: 0
Feb 12 22:01:12.377: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2405/daemonsets","resourceVersion":"8032289"},"items":null}

Feb 12 22:01:12.379: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2405/pods","resourceVersion":"8032289"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:01:12.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2405" for this suite.

• [SLOW TEST:63.070 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":277,"completed":230,"skipped":3898,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:01:12.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-b0b1eb45-0905-462d-8ebf-17893b4b9968
STEP: Creating a pod to test consume secrets
Feb 12 22:01:12.603: INFO: Waiting up to 5m0s for pod "pod-secrets-3286257e-df2d-4be4-ad39-74fe580eec2b" in namespace "secrets-5423" to be "Succeeded or Failed"
Feb 12 22:01:12.643: INFO: Pod "pod-secrets-3286257e-df2d-4be4-ad39-74fe580eec2b": Phase="Pending", Reason="", readiness=false. Elapsed: 39.747317ms
Feb 12 22:01:14.649: INFO: Pod "pod-secrets-3286257e-df2d-4be4-ad39-74fe580eec2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045387619s
Feb 12 22:01:16.663: INFO: Pod "pod-secrets-3286257e-df2d-4be4-ad39-74fe580eec2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059775754s
Feb 12 22:01:18.670: INFO: Pod "pod-secrets-3286257e-df2d-4be4-ad39-74fe580eec2b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066823315s
Feb 12 22:01:20.675: INFO: Pod "pod-secrets-3286257e-df2d-4be4-ad39-74fe580eec2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072171254s
STEP: Saw pod success
Feb 12 22:01:20.676: INFO: Pod "pod-secrets-3286257e-df2d-4be4-ad39-74fe580eec2b" satisfied condition "Succeeded or Failed"
Feb 12 22:01:20.679: INFO: Trying to get logs from node jerma-node pod pod-secrets-3286257e-df2d-4be4-ad39-74fe580eec2b container secret-volume-test: 
STEP: delete the pod
Feb 12 22:01:20.839: INFO: Waiting for pod pod-secrets-3286257e-df2d-4be4-ad39-74fe580eec2b to disappear
Feb 12 22:01:20.845: INFO: Pod pod-secrets-3286257e-df2d-4be4-ad39-74fe580eec2b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:01:20.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5423" for this suite.

• [SLOW TEST:8.406 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":277,"completed":231,"skipped":3898,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:01:20.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 22:01:21.087: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 12 22:01:21.106: INFO: Number of nodes with available pods: 0
Feb 12 22:01:21.106: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 12 22:01:21.134: INFO: Number of nodes with available pods: 0
Feb 12 22:01:21.134: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
[the same pair of poll lines repeated roughly once per second through 22:01:29]
Feb 12 22:01:30.142: INFO: Number of nodes with available pods: 1
Feb 12 22:01:30.142: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 12 22:01:30.222: INFO: Number of nodes with available pods: 1
Feb 12 22:01:30.222: INFO: Number of running nodes: 0, number of available pods: 1
Feb 12 22:01:31.229: INFO: Number of nodes with available pods: 0
Feb 12 22:01:31.229: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 12 22:01:31.262: INFO: Number of nodes with available pods: 0
Feb 12 22:01:31.262: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
[the same pair of poll lines repeated roughly once per second through 22:01:52]
Feb 12 22:01:53.270: INFO: Number of nodes with available pods: 1
Feb 12 22:01:53.270: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8900, will wait for the garbage collector to delete the pods
Feb 12 22:01:53.396: INFO: Deleting DaemonSet.extensions daemon-set took: 58.921151ms
Feb 12 22:01:53.696: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.448644ms
Feb 12 22:02:03.100: INFO: Number of nodes with available pods: 0
Feb 12 22:02:03.100: INFO: Number of running nodes: 0, number of available pods: 0
Feb 12 22:02:03.103: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8900/daemonsets","resourceVersion":"8032505"},"items":null}

Feb 12 22:02:03.105: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8900/pods","resourceVersion":"8032505"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:02:03.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8900" for this suite.

• [SLOW TEST:42.321 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":277,"completed":232,"skipped":3903,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:02:03.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 12 22:02:03.269: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3322 /api/v1/namespaces/watch-3322/configmaps/e2e-watch-test-configmap-a 9e9b7f6e-d8e3-4a7d-88a1-1088ad506b55 8032513 0 2020-02-12 22:02:03 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 12 22:02:03.269: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3322 /api/v1/namespaces/watch-3322/configmaps/e2e-watch-test-configmap-a 9e9b7f6e-d8e3-4a7d-88a1-1088ad506b55 8032513 0 2020-02-12 22:02:03 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 12 22:02:13.288: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3322 /api/v1/namespaces/watch-3322/configmaps/e2e-watch-test-configmap-a 9e9b7f6e-d8e3-4a7d-88a1-1088ad506b55 8032554 0 2020-02-12 22:02:03 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 12 22:02:13.288: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3322 /api/v1/namespaces/watch-3322/configmaps/e2e-watch-test-configmap-a 9e9b7f6e-d8e3-4a7d-88a1-1088ad506b55 8032554 0 2020-02-12 22:02:03 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 12 22:02:23.302: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3322 /api/v1/namespaces/watch-3322/configmaps/e2e-watch-test-configmap-a 9e9b7f6e-d8e3-4a7d-88a1-1088ad506b55 8032578 0 2020-02-12 22:02:03 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 12 22:02:23.302: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3322 /api/v1/namespaces/watch-3322/configmaps/e2e-watch-test-configmap-a 9e9b7f6e-d8e3-4a7d-88a1-1088ad506b55 8032578 0 2020-02-12 22:02:03 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 12 22:02:33.313: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3322 /api/v1/namespaces/watch-3322/configmaps/e2e-watch-test-configmap-a 9e9b7f6e-d8e3-4a7d-88a1-1088ad506b55 8032598 0 2020-02-12 22:02:03 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 12 22:02:33.313: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-3322 /api/v1/namespaces/watch-3322/configmaps/e2e-watch-test-configmap-a 9e9b7f6e-d8e3-4a7d-88a1-1088ad506b55 8032598 0 2020-02-12 22:02:03 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 12 22:02:43.326: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3322 /api/v1/namespaces/watch-3322/configmaps/e2e-watch-test-configmap-b dc2fcbbd-93cb-4850-afa8-5dc6437ebc7a 8032624 0 2020-02-12 22:02:43 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 12 22:02:43.326: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3322 /api/v1/namespaces/watch-3322/configmaps/e2e-watch-test-configmap-b dc2fcbbd-93cb-4850-afa8-5dc6437ebc7a 8032624 0 2020-02-12 22:02:43 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 12 22:02:53.339: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3322 /api/v1/namespaces/watch-3322/configmaps/e2e-watch-test-configmap-b dc2fcbbd-93cb-4850-afa8-5dc6437ebc7a 8032649 0 2020-02-12 22:02:43 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 12 22:02:53.339: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-3322 /api/v1/namespaces/watch-3322/configmaps/e2e-watch-test-configmap-b dc2fcbbd-93cb-4850-afa8-5dc6437ebc7a 8032649 0 2020-02-12 22:02:43 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:03:03.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3322" for this suite.

• [SLOW TEST:60.174 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":277,"completed":233,"skipped":3951,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:03:03.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 12 22:03:23.623: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1231 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 22:03:23.623: INFO: >>> kubeConfig: /root/.kube/config
[log.go:172 klog lines condensed: SPDY streams 1, 3, and 5 created, exec output exchanged as data frames, then all streams removed]
Feb 12 22:03:23.958: INFO: Exec stderr: ""
Feb 12 22:03:23.958: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1231 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 22:03:23.958: INFO: >>> kubeConfig: /root/.kube/config
[log.go:172 klog lines condensed: SPDY streams 1, 3, and 5 created, exec output exchanged as data frames, then all streams removed]
Feb 12 22:03:24.196: INFO: Exec stderr: ""
Feb 12 22:03:24.196: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1231 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 22:03:24.196: INFO: >>> kubeConfig: /root/.kube/config
[log.go:172 klog lines condensed: SPDY streams 1, 3, and 5 created, exec output exchanged as data frames, then all streams removed]
Feb 12 22:03:24.355: INFO: Exec stderr: ""
Feb 12 22:03:24.355: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1231 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 22:03:24.355: INFO: >>> kubeConfig: /root/.kube/config
[log.go:172 klog lines condensed: SPDY streams 1, 3, and 5 created, exec output exchanged as data frames, then all streams removed]
Feb 12 22:03:24.582: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 12 22:03:24.582: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1231 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 22:03:24.583: INFO: >>> kubeConfig: /root/.kube/config
[log.go:172 klog lines condensed: SPDY streams 1, 3, and 5 created, exec output exchanged as data frames, then all streams removed]
Feb 12 22:03:24.810: INFO: Exec stderr: ""
Feb 12 22:03:24.810: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1231 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 22:03:24.810: INFO: >>> kubeConfig: /root/.kube/config
[log.go:172 klog lines condensed: SPDY streams 1, 3, and 5 created, exec output exchanged as data frames, then all streams removed]
Feb 12 22:03:25.053: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 12 22:03:25.053: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1231 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 22:03:25.053: INFO: >>> kubeConfig: /root/.kube/config
[log.go:172 klog lines condensed: SPDY streams 1, 3, and 5 created, exec output exchanged as data frames, then all streams removed]
Feb 12 22:03:25.256: INFO: Exec stderr: ""
Feb 12 22:03:25.256: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1231 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 22:03:25.256: INFO: >>> kubeConfig: /root/.kube/config
[log.go:172 klog lines condensed: SPDY streams 1, 3, and 5 created, exec output exchanged as data frames, then all streams removed]
Feb 12 22:03:25.447: INFO: Exec stderr: ""
Feb 12 22:03:25.448: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1231 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 22:03:25.448: INFO: >>> kubeConfig: /root/.kube/config
[log.go:172 klog lines condensed: SPDY streams 1, 3, and 5 created, exec output exchanged as data frames, then all streams removed]
Feb 12 22:03:25.636: INFO: Exec stderr: ""
Feb 12 22:03:25.636: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1231 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 22:03:25.636: INFO: >>> kubeConfig: /root/.kube/config
[log.go:172 klog lines condensed: SPDY streams 1, 3, and 5 created, exec output exchanged as data frames, then all streams removed]
Feb 12 22:03:25.831: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:03:25.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1231" for this suite.

• [SLOW TEST:22.488 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":234,"skipped":3973,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:03:25.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-1dc1dc9d-d27f-4b52-9dd2-0192ad2c6659
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:03:25.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9287" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":277,"completed":235,"skipped":3974,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:03:26.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
Feb 12 22:03:26.066: INFO: Waiting up to 5m0s for pod "var-expansion-7e130535-47de-4779-a290-22eb499223ec" in namespace "var-expansion-3734" to be "Succeeded or Failed"
Feb 12 22:03:26.156: INFO: Pod "var-expansion-7e130535-47de-4779-a290-22eb499223ec": Phase="Pending", Reason="", readiness=false. Elapsed: 89.70533ms
Feb 12 22:03:28.180: INFO: Pod "var-expansion-7e130535-47de-4779-a290-22eb499223ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113914555s
Feb 12 22:03:30.208: INFO: Pod "var-expansion-7e130535-47de-4779-a290-22eb499223ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142159645s
Feb 12 22:03:32.313: INFO: Pod "var-expansion-7e130535-47de-4779-a290-22eb499223ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.246919186s
Feb 12 22:03:34.321: INFO: Pod "var-expansion-7e130535-47de-4779-a290-22eb499223ec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.255127431s
Feb 12 22:03:36.329: INFO: Pod "var-expansion-7e130535-47de-4779-a290-22eb499223ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.26268485s
STEP: Saw pod success
Feb 12 22:03:36.329: INFO: Pod "var-expansion-7e130535-47de-4779-a290-22eb499223ec" satisfied condition "Succeeded or Failed"
Feb 12 22:03:36.334: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod var-expansion-7e130535-47de-4779-a290-22eb499223ec container dapi-container: 
STEP: delete the pod
Feb 12 22:03:36.843: INFO: Waiting for pod var-expansion-7e130535-47de-4779-a290-22eb499223ec to disappear
Feb 12 22:03:37.066: INFO: Pod var-expansion-7e130535-47de-4779-a290-22eb499223ec no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:03:37.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3734" for this suite.

• [SLOW TEST:11.130 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":277,"completed":236,"skipped":3987,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:03:37.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 12 22:03:37.203: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:04:03.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7566" for this suite.

• [SLOW TEST:25.969 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":277,"completed":237,"skipped":3993,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:04:03.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 12 22:04:03.207: INFO: Waiting up to 5m0s for pod "pod-b8ea4ef2-0c18-4154-ad96-ce804741e504" in namespace "emptydir-1894" to be "Succeeded or Failed"
Feb 12 22:04:03.216: INFO: Pod "pod-b8ea4ef2-0c18-4154-ad96-ce804741e504": Phase="Pending", Reason="", readiness=false. Elapsed: 8.263581ms
Feb 12 22:04:05.223: INFO: Pod "pod-b8ea4ef2-0c18-4154-ad96-ce804741e504": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015385574s
Feb 12 22:04:07.228: INFO: Pod "pod-b8ea4ef2-0c18-4154-ad96-ce804741e504": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020198819s
Feb 12 22:04:09.536: INFO: Pod "pod-b8ea4ef2-0c18-4154-ad96-ce804741e504": Phase="Pending", Reason="", readiness=false. Elapsed: 6.328302651s
Feb 12 22:04:11.539: INFO: Pod "pod-b8ea4ef2-0c18-4154-ad96-ce804741e504": Phase="Pending", Reason="", readiness=false. Elapsed: 8.331881325s
Feb 12 22:04:13.546: INFO: Pod "pod-b8ea4ef2-0c18-4154-ad96-ce804741e504": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.338829794s
STEP: Saw pod success
Feb 12 22:04:13.546: INFO: Pod "pod-b8ea4ef2-0c18-4154-ad96-ce804741e504" satisfied condition "Succeeded or Failed"
Feb 12 22:04:13.551: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-b8ea4ef2-0c18-4154-ad96-ce804741e504 container test-container: 
STEP: delete the pod
Feb 12 22:04:13.726: INFO: Waiting for pod pod-b8ea4ef2-0c18-4154-ad96-ce804741e504 to disappear
Feb 12 22:04:14.135: INFO: Pod pod-b8ea4ef2-0c18-4154-ad96-ce804741e504 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:04:14.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1894" for this suite.

• [SLOW TEST:11.066 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":238,"skipped":4051,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:04:14.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 22:04:14.494: INFO: Waiting up to 5m0s for pod "busybox-user-65534-addeeadb-0ea9-44da-85d7-11cfa5a604d7" in namespace "security-context-test-3232" to be "Succeeded or Failed"
Feb 12 22:04:14.500: INFO: Pod "busybox-user-65534-addeeadb-0ea9-44da-85d7-11cfa5a604d7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.986641ms
Feb 12 22:04:16.506: INFO: Pod "busybox-user-65534-addeeadb-0ea9-44da-85d7-11cfa5a604d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011887279s
Feb 12 22:04:18.531: INFO: Pod "busybox-user-65534-addeeadb-0ea9-44da-85d7-11cfa5a604d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036291634s
Feb 12 22:04:20.541: INFO: Pod "busybox-user-65534-addeeadb-0ea9-44da-85d7-11cfa5a604d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046414326s
Feb 12 22:04:22.553: INFO: Pod "busybox-user-65534-addeeadb-0ea9-44da-85d7-11cfa5a604d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05823324s
Feb 12 22:04:22.553: INFO: Pod "busybox-user-65534-addeeadb-0ea9-44da-85d7-11cfa5a604d7" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:04:22.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3232" for this suite.

• [SLOW TEST:8.496 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":239,"skipped":4066,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:04:22.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:04:31.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9598" for this suite.

• [SLOW TEST:8.667 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":277,"completed":240,"skipped":4069,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:04:31.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 12 22:04:32.139: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 12 22:04:34.151: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141872, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141872, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141872, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141871, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:04:36.156: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141872, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141872, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141872, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141871, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:04:38.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141872, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141872, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141872, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141871, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:04:40.160: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141872, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141872, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141872, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141871, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 12 22:04:43.200: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 22:04:43.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:04:45.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-721" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:14.518 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":277,"completed":241,"skipped":4079,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:04:45.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 12 22:05:00.110: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 22:05:00.129: INFO: Pod pod-with-prestop-http-hook still exists
Feb 12 22:05:02.129: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 22:05:02.146: INFO: Pod pod-with-prestop-http-hook still exists
Feb 12 22:05:04.129: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 22:05:04.206: INFO: Pod pod-with-prestop-http-hook still exists
Feb 12 22:05:06.129: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 22:05:06.135: INFO: Pod pod-with-prestop-http-hook still exists
Feb 12 22:05:08.129: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 22:05:08.154: INFO: Pod pod-with-prestop-http-hook still exists
Feb 12 22:05:10.129: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 22:05:10.137: INFO: Pod pod-with-prestop-http-hook still exists
Feb 12 22:05:12.129: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 22:05:12.139: INFO: Pod pod-with-prestop-http-hook still exists
Feb 12 22:05:14.129: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 12 22:05:14.134: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:05:14.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7698" for this suite.

• [SLOW TEST:28.321 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":277,"completed":242,"skipped":4119,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:05:14.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-4f5249f1-e0c1-404b-a8ea-bae886f8fa3f
STEP: Creating a pod to test consume configMaps
Feb 12 22:05:14.295: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4c75019f-5651-440d-b084-7b0551be6a4f" in namespace "projected-3362" to be "Succeeded or Failed"
Feb 12 22:05:14.301: INFO: Pod "pod-projected-configmaps-4c75019f-5651-440d-b084-7b0551be6a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.821039ms
Feb 12 22:05:17.077: INFO: Pod "pod-projected-configmaps-4c75019f-5651-440d-b084-7b0551be6a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.782625604s
Feb 12 22:05:19.083: INFO: Pod "pod-projected-configmaps-4c75019f-5651-440d-b084-7b0551be6a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.788166812s
Feb 12 22:05:21.092: INFO: Pod "pod-projected-configmaps-4c75019f-5651-440d-b084-7b0551be6a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.796830639s
Feb 12 22:05:23.096: INFO: Pod "pod-projected-configmaps-4c75019f-5651-440d-b084-7b0551be6a4f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.801295929s
Feb 12 22:05:25.100: INFO: Pod "pod-projected-configmaps-4c75019f-5651-440d-b084-7b0551be6a4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.805272473s
STEP: Saw pod success
Feb 12 22:05:25.100: INFO: Pod "pod-projected-configmaps-4c75019f-5651-440d-b084-7b0551be6a4f" satisfied condition "Succeeded or Failed"
Feb 12 22:05:25.103: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-4c75019f-5651-440d-b084-7b0551be6a4f container projected-configmap-volume-test: 
STEP: delete the pod
Feb 12 22:05:25.133: INFO: Waiting for pod pod-projected-configmaps-4c75019f-5651-440d-b084-7b0551be6a4f to disappear
Feb 12 22:05:25.138: INFO: Pod pod-projected-configmaps-4c75019f-5651-440d-b084-7b0551be6a4f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:05:25.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3362" for this suite.

• [SLOW TEST:10.976 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":243,"skipped":4149,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:05:25.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 22:05:25.368: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-590798fb-ced7-4d3f-b018-f9fa628cd628" in namespace "security-context-test-2648" to be "Succeeded or Failed"
Feb 12 22:05:25.419: INFO: Pod "alpine-nnp-false-590798fb-ced7-4d3f-b018-f9fa628cd628": Phase="Pending", Reason="", readiness=false. Elapsed: 50.680983ms
Feb 12 22:05:27.425: INFO: Pod "alpine-nnp-false-590798fb-ced7-4d3f-b018-f9fa628cd628": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057209067s
Feb 12 22:05:29.433: INFO: Pod "alpine-nnp-false-590798fb-ced7-4d3f-b018-f9fa628cd628": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065250018s
Feb 12 22:05:31.448: INFO: Pod "alpine-nnp-false-590798fb-ced7-4d3f-b018-f9fa628cd628": Phase="Pending", Reason="", readiness=false. Elapsed: 6.080137975s
Feb 12 22:05:33.456: INFO: Pod "alpine-nnp-false-590798fb-ced7-4d3f-b018-f9fa628cd628": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088109858s
Feb 12 22:05:35.548: INFO: Pod "alpine-nnp-false-590798fb-ced7-4d3f-b018-f9fa628cd628": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.179652839s
Feb 12 22:05:35.548: INFO: Pod "alpine-nnp-false-590798fb-ced7-4d3f-b018-f9fa628cd628" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:05:35.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2648" for this suite.

• [SLOW TEST:10.472 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":244,"skipped":4153,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:05:35.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 12 22:05:36.640: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 12 22:05:38.661: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141936, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141936, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141936, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141936, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:05:40.667: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141936, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141936, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141936, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141936, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:05:42.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141936, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141936, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141936, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717141936, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 12 22:05:45.728: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:05:46.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7674" for this suite.
STEP: Destroying namespace "webhook-7674-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.645 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":277,"completed":245,"skipped":4154,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:05:46.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 22:05:56.415: INFO: Waiting up to 5m0s for pod "client-envvars-760adf5c-542d-427d-82ae-7a02503c21c9" in namespace "pods-8046" to be "Succeeded or Failed"
Feb 12 22:05:56.421: INFO: Pod "client-envvars-760adf5c-542d-427d-82ae-7a02503c21c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216399ms
Feb 12 22:05:58.427: INFO: Pod "client-envvars-760adf5c-542d-427d-82ae-7a02503c21c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012656669s
Feb 12 22:06:00.435: INFO: Pod "client-envvars-760adf5c-542d-427d-82ae-7a02503c21c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019818593s
Feb 12 22:06:02.444: INFO: Pod "client-envvars-760adf5c-542d-427d-82ae-7a02503c21c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028829005s
Feb 12 22:06:04.450: INFO: Pod "client-envvars-760adf5c-542d-427d-82ae-7a02503c21c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.035601687s
STEP: Saw pod success
Feb 12 22:06:04.450: INFO: Pod "client-envvars-760adf5c-542d-427d-82ae-7a02503c21c9" satisfied condition "Succeeded or Failed"
Feb 12 22:06:04.454: INFO: Trying to get logs from node jerma-node pod client-envvars-760adf5c-542d-427d-82ae-7a02503c21c9 container env3cont: 
STEP: delete the pod
Feb 12 22:06:04.541: INFO: Waiting for pod client-envvars-760adf5c-542d-427d-82ae-7a02503c21c9 to disappear
Feb 12 22:06:04.552: INFO: Pod client-envvars-760adf5c-542d-427d-82ae-7a02503c21c9 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:06:04.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8046" for this suite.

• [SLOW TEST:18.292 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":277,"completed":246,"skipped":4175,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:06:04.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-581b6521-f6c2-48da-b535-eb15b43595da
STEP: Creating a pod to test consume configMaps
Feb 12 22:06:04.742: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fdaa47c9-72a1-402f-b05d-eb1c8c5ffbe2" in namespace "projected-1046" to be "Succeeded or Failed"
Feb 12 22:06:04.749: INFO: Pod "pod-projected-configmaps-fdaa47c9-72a1-402f-b05d-eb1c8c5ffbe2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.927331ms
Feb 12 22:06:06.756: INFO: Pod "pod-projected-configmaps-fdaa47c9-72a1-402f-b05d-eb1c8c5ffbe2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014414763s
Feb 12 22:06:09.692: INFO: Pod "pod-projected-configmaps-fdaa47c9-72a1-402f-b05d-eb1c8c5ffbe2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.950261327s
Feb 12 22:06:11.698: INFO: Pod "pod-projected-configmaps-fdaa47c9-72a1-402f-b05d-eb1c8c5ffbe2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.956409713s
Feb 12 22:06:13.718: INFO: Pod "pod-projected-configmaps-fdaa47c9-72a1-402f-b05d-eb1c8c5ffbe2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.975836762s
Feb 12 22:06:15.727: INFO: Pod "pod-projected-configmaps-fdaa47c9-72a1-402f-b05d-eb1c8c5ffbe2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.984669325s
STEP: Saw pod success
Feb 12 22:06:15.727: INFO: Pod "pod-projected-configmaps-fdaa47c9-72a1-402f-b05d-eb1c8c5ffbe2" satisfied condition "Succeeded or Failed"
Feb 12 22:06:15.735: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-fdaa47c9-72a1-402f-b05d-eb1c8c5ffbe2 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 12 22:06:15.816: INFO: Waiting for pod pod-projected-configmaps-fdaa47c9-72a1-402f-b05d-eb1c8c5ffbe2 to disappear
Feb 12 22:06:15.831: INFO: Pod pod-projected-configmaps-fdaa47c9-72a1-402f-b05d-eb1c8c5ffbe2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:06:15.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1046" for this suite.

• [SLOW TEST:11.290 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":277,"completed":247,"skipped":4184,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:06:15.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 22:06:15.971: INFO: Creating ReplicaSet my-hostname-basic-21a29180-af14-4ab9-be44-ea21512ef77c
Feb 12 22:06:16.008: INFO: Pod name my-hostname-basic-21a29180-af14-4ab9-be44-ea21512ef77c: Found 0 pods out of 1
Feb 12 22:06:21.018: INFO: Pod name my-hostname-basic-21a29180-af14-4ab9-be44-ea21512ef77c: Found 1 pods out of 1
Feb 12 22:06:21.018: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-21a29180-af14-4ab9-be44-ea21512ef77c" is running
Feb 12 22:06:25.031: INFO: Pod "my-hostname-basic-21a29180-af14-4ab9-be44-ea21512ef77c-gwjpx" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 22:06:16 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 22:06:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-21a29180-af14-4ab9-be44-ea21512ef77c]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 22:06:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-21a29180-af14-4ab9-be44-ea21512ef77c]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 22:06:16 +0000 UTC Reason: Message:}])
Feb 12 22:06:25.031: INFO: Trying to dial the pod
Feb 12 22:06:30.046: INFO: Controller my-hostname-basic-21a29180-af14-4ab9-be44-ea21512ef77c: Got expected result from replica 1 [my-hostname-basic-21a29180-af14-4ab9-be44-ea21512ef77c-gwjpx]: "my-hostname-basic-21a29180-af14-4ab9-be44-ea21512ef77c-gwjpx", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:06:30.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8070" for this suite.

• [SLOW TEST:14.199 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":277,"completed":248,"skipped":4186,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:06:30.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 22:06:30.148: INFO: Creating deployment "webserver-deployment"
Feb 12 22:06:30.155: INFO: Waiting for observed generation 1
Feb 12 22:06:32.166: INFO: Waiting for all required pods to come up
Feb 12 22:06:32.175: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 12 22:06:58.194: INFO: Waiting for deployment "webserver-deployment" to complete
Feb 12 22:06:58.203: INFO: Updating deployment "webserver-deployment" with a non-existent image
Feb 12 22:06:58.222: INFO: Updating deployment webserver-deployment
Feb 12 22:06:58.222: INFO: Waiting for observed generation 2
Feb 12 22:07:00.860: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 12 22:07:01.527: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 12 22:07:01.587: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 12 22:07:02.079: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 12 22:07:02.079: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 12 22:07:02.082: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Feb 12 22:07:02.099: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Feb 12 22:07:02.099: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Feb 12 22:07:02.105: INFO: Updating deployment webserver-deployment
Feb 12 22:07:02.105: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Feb 12 22:07:03.200: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 12 22:07:03.807: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
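Note on the replica counts above: they are the proportional-scaling math working out. The deployment dump below shows maxSurge: 3 and maxUnavailable: 2, so the rollout stuck on the bad image holds 8 old + 5 new = 13 pods. Scaling from 10 to 30 raises the surge-adjusted cap, and the extra replicas are split across the two replicasets in proportion to their current sizes rather than evenly:

    total cap  = 30 + maxSurge(3)        = 33
    to add     = 33 - (8 old + 5 new)    = 20
    old share  = 20 * 8/13 ≈ 12   ->  8 + 12 = 20
    new share  = 20 * 5/13 ≈  8   ->  5 +  8 = 13

Rounding is reconciled so the sets sum to exactly 33, which is why the log expects .spec.replicas = 20 and 13.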
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 12 22:07:04.069: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-5669 /apis/apps/v1/namespaces/deployment-5669/deployments/webserver-deployment 052aa90d-5c17-4478-9a74-add2d2232b4a 8033866 3 2020-02-12 22:06:30 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a42dd8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-02-12 22:06:58 +0000 UTC,LastTransitionTime:2020-02-12 22:06:30 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-12 22:07:03 +0000 UTC,LastTransitionTime:2020-02-12 22:07:03 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Feb 12 22:07:05.015: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-5669 /apis/apps/v1/namespaces/deployment-5669/replicasets/webserver-deployment-c7997dcc8 d6faee88-c8a7-40c6-943a-2515afaf481d 8033863 3 2020-02-12 22:06:58 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 052aa90d-5c17-4478-9a74-add2d2232b4a 0xc00396bd67 0xc00396bd68}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00396bdd8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 12 22:07:05.015: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Feb 12 22:07:05.016: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-5669 /apis/apps/v1/namespaces/deployment-5669/replicasets/webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 8033861 3 2020-02-12 22:06:30 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 052aa90d-5c17-4478-9a74-add2d2232b4a 0xc00396bca7 0xc00396bca8}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00396bd08  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Feb 12 22:07:08.675: INFO: Pod "webserver-deployment-595b5b9587-26swn" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-26swn webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-26swn 194c3201-e32f-4562-b18d-28151077ba33 8033900 0 2020-02-12 22:07:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc0057922b7 0xc0057922b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.676: INFO: Pod "webserver-deployment-595b5b9587-4mqtz" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-4mqtz webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-4mqtz cc478eaa-a899-4e33-afd8-91cb0405808b 8033741 0 2020-02-12 22:06:30 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc0057923b0 0xc0057923b1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-12 22:06:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-12 22:06:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://e841b416e3a36ec8c42caba064511d5b0f3de20e8b27c6c9e2bb54645dc98025,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.676: INFO: Pod "webserver-deployment-595b5b9587-5rmdv" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5rmdv webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-5rmdv 36fd9dc9-8c05-41f9-b6b4-1d7dd5010299 8033793 0 2020-02-12 22:06:30 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc005792520 0xc005792521}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-02-12 22:06:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-12 22:06:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://738b9ac6565ff1b66f87630bed1a476c7e7655fa506cf30f83ad60b9bb717cb9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.676: INFO: Pod "webserver-deployment-595b5b9587-62z8h" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-62z8h webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-62z8h bd5fd392-e67d-48fb-8981-30747fd475ee 8033768 0 2020-02-12 22:06:30 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc005792690 0xc005792691}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-02-12 22:06:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-12 22:06:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c4a172e03948ae2ddbed2af83f7eccb1567aa878a12dee877eddd0146b1dcdfd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.677: INFO: Pod "webserver-deployment-595b5b9587-6ln9l" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6ln9l webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-6ln9l 37a21d05-1b9c-41c5-879f-6c23634039cb 8033790 0 2020-02-12 22:06:30 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc005792800 0xc005792801}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-02-12 22:06:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-12 22:06:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://94eb4d12512d6f7e1829e243531a2c1e3573a5a40b584aa92df3fb9753ea8d86,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.677: INFO: Pod "webserver-deployment-595b5b9587-bvndf" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bvndf webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-bvndf 1e27b476-81fc-4272-8a2f-45c9ce326550 8033885 0 2020-02-12 22:07:04 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc005792960 0xc005792961}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:07:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.678: INFO: Pod "webserver-deployment-595b5b9587-ct867" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ct867 webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-ct867 3e69793c-ff13-4812-9568-804e2ca45622 8033771 0 2020-02-12 22:06:30 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc005792a77 0xc005792a78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-02-12 22:06:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-12 22:06:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://1c34c2d0241de5bb2a3a3907eadceae45026805db66fc608b87b703e77cc64cd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.678: INFO: Pod "webserver-deployment-595b5b9587-ffjml" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ffjml webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-ffjml eb09b6e8-358a-4c6d-983d-8a8926478445 8033908 0 2020-02-12 22:07:04 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc005792bf0 0xc005792bf1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:07:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.678: INFO: Pod "webserver-deployment-595b5b9587-fkth8" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-fkth8 webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-fkth8 905bdc4e-e533-4a7e-8410-9305ef8a2abf 8033907 0 2020-02-12 22:07:04 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc005792d07 0xc005792d08}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:07:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.679: INFO: Pod "webserver-deployment-595b5b9587-flqbs" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-flqbs webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-flqbs 1c9e173c-f90d-4ff8-9b7d-56f64bf9b542 8033909 0 2020-02-12 22:07:04 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc005792e27 0xc005792e28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:07:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.679: INFO: Pod "webserver-deployment-595b5b9587-gr9g4" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-gr9g4 webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-gr9g4 5c78e881-a695-4750-933d-4ca9d1d3db0b 8033902 0 2020-02-12 22:07:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc005792f57 0xc005792f58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.679: INFO: Pod "webserver-deployment-595b5b9587-hqxbw" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-hqxbw webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-hqxbw e8271598-838b-4e7e-ad6e-0b01c23d82dc 8033905 0 2020-02-12 22:07:04 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc005793050 0xc005793051}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:07:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.680: INFO: Pod "webserver-deployment-595b5b9587-kxpwn" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kxpwn webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-kxpwn 1738433f-f0cb-4f4f-b28d-e5ac63c03fa9 8033899 0 2020-02-12 22:07:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc005793157 0xc005793158}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.680: INFO: Pod "webserver-deployment-595b5b9587-nh8c7" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nh8c7 webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-nh8c7 f456e616-ffb9-4671-8937-a9a634c064be 8033787 0 2020-02-12 22:06:30 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc005793250 0xc005793251}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-02-12 22:06:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-12 22:06:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://265bc22609327cae5ecd5acd4901e52f14f81016426225ed5c2bc95b7455bf9e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.681: INFO: Pod "webserver-deployment-595b5b9587-nzz66" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-nzz66 webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-nzz66 55026238-b398-434a-bf58-4ed4c232eff4 8033881 0 2020-02-12 22:07:04 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc0057933b0 0xc0057933b1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:07:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.682: INFO: Pod "webserver-deployment-595b5b9587-phdbv" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-phdbv webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-phdbv 4d4bb50a-0511-4986-8fd1-53a83a42b55d 8033774 0 2020-02-12 22:06:30 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc0057934b7 0xc0057934b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-12 22:06:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-12 22:06:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-12 22:06:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://dd7d8e995059b4de6cdb63f606b7bca6bb2d7c76910e255985f452e6c64c3cb0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.682: INFO: Pod "webserver-deployment-595b5b9587-ptrvk" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ptrvk webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-ptrvk 362da3a3-5d5d-4155-8c50-120485067c0e 8033777 0 2020-02-12 22:06:30 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc005793630 0xc005793631}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-12 22:06:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-02-12 22:06:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-12 22:06:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://ac4be6c708dd9d23a5c0e747538bcb33dbd5b02d3574a3143902d6e1d138c4a8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
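Both available pods report QOSClass:BestEffort, which follows directly from their container spec: Resources shows empty Requests and Limits. A simplified sketch of the standard QoS classification (the real kubelet logic also covers init containers and parses resource quantities instead of comparing strings):

    package main

    import "fmt"

    // Requests/Limits keyed by resource name ("cpu", "memory"), values as
    // quantity strings. A simplified stand-in for v1.ResourceList.
    type Container struct {
        Requests, Limits map[string]string
    }

    // qosClass applies the standard rules: nothing set anywhere -> BestEffort;
    // cpu and memory limits set and equal to requests on every container ->
    // Guaranteed; otherwise Burstable.
    func qosClass(cs []Container) string {
        anySet := false
        guaranteed := true
        for _, c := range cs {
            if len(c.Requests) > 0 || len(c.Limits) > 0 {
                anySet = true
            }
            for _, r := range []string{"cpu", "memory"} {
                if c.Limits[r] == "" || c.Limits[r] != c.Requests[r] {
                    guaranteed = false
                }
            }
        }
        switch {
        case !anySet:
            return "BestEffort"
        case guaranteed:
            return "Guaranteed"
        default:
            return "Burstable"
        }
    }

    func main() {
        // The httpd container above: empty Requests and Limits.
        fmt.Println(qosClass([]Container{{}})) // BestEffort
    }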
Feb 12 22:07:08.683: INFO: Pod "webserver-deployment-595b5b9587-q6cl8" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-q6cl8 webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-q6cl8 e66a924f-7402-43be-a4c7-996eb6038fa1 8033876 0 2020-02-12 22:07:03 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc0057937a0 0xc0057937a1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:07:04 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
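A note on the notation in these dumps: fields printed with a leading asterisk, such as TolerationSeconds:*300 or DefaultMode:*420, are dereferenced pointer values (420 decimal is file mode 0644 in octal). Every pod here also carries the same two NoExecute tolerations, for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable, each with TolerationSeconds 300; these are the cluster defaults injected by the DefaultTolerationSeconds admission plugin when a pod spec does not set its own. Reconstructed as Go values (a sketch using simplified local types, not the v1 API structs):

    package main

    import "fmt"

    // Simplified stand-in for v1.Toleration.
    type Toleration struct {
        Key, Operator, Effect string
        TolerationSeconds     *int64 // printed as *300 in the dumps above
    }

    func main() {
        grace := int64(300)
        defaults := []Toleration{
            {Key: "node.kubernetes.io/not-ready", Operator: "Exists", Effect: "NoExecute", TolerationSeconds: &grace},
            {Key: "node.kubernetes.io/unreachable", Operator: "Exists", Effect: "NoExecute", TolerationSeconds: &grace},
        }
        for _, t := range defaults {
            // A NoExecute toleration with TolerationSeconds keeps the pod bound
            // to a tainted node for that many seconds before eviction.
            fmt.Printf("tolerate %s for %ds\n", t.Key, *t.TolerationSeconds)
        }
    }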
Feb 12 22:07:08.683: INFO: Pod "webserver-deployment-595b5b9587-qsr7t" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qsr7t webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-qsr7t 7ffe7efe-67f5-44c6-a214-7d064f5c5cda 8033901 0 2020-02-12 22:07:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc0057938a7 0xc0057938a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.683: INFO: Pod "webserver-deployment-595b5b9587-xkz2m" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xkz2m webserver-deployment-595b5b9587- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-595b5b9587-xkz2m 6394aba6-3d5d-4132-8e2c-6e254cca5b1d 8033903 0 2020-02-12 22:07:05 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f8b588a6-f015-49be-863f-ed2e445ec3f4 0xc0057939a0 0xc0057939a1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
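Up to this point every pod name carries pod-template-hash 595b5b9587 and runs image docker.io/library/httpd:2.4.38-alpine; from the next entry on, the hash is c7997dcc8 and the template image is webserver:404. Two hashes at once means the Deployment is mid-rollout, with the old and new ReplicaSets both owning pods (compare the ReplicaSet names in the ownerReferences of each dump). A minimal sketch of splitting such a pod list by that label, with the sample names taken from this log:

    package main

    import "fmt"

    func main() {
        // name -> pod-template-hash label, sampled from this log.
        pods := map[string]string{
            "webserver-deployment-595b5b9587-phdbv": "595b5b9587", // old RS, httpd:2.4.38-alpine
            "webserver-deployment-595b5b9587-xkz2m": "595b5b9587",
            "webserver-deployment-c7997dcc8-475qt":  "c7997dcc8", // new RS, webserver:404
        }
        byHash := map[string][]string{}
        for name, hash := range pods {
            byHash[hash] = append(byHash[hash], name)
        }
        // Each hash corresponds to one ReplicaSet of the same Deployment.
        fmt.Println(byHash)
    }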
Feb 12 22:07:08.684: INFO: Pod "webserver-deployment-c7997dcc8-475qt" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-475qt webserver-deployment-c7997dcc8- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-c7997dcc8-475qt fe2ae627-8dc1-47db-9f74-15ed938296db 8033896 0 2020-02-12 22:07:04 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6faee88-c8a7-40c6-943a-2515afaf481d 0xc005793a90 0xc005793a91}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:07:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.684: INFO: Pod "webserver-deployment-c7997dcc8-4x2tn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4x2tn webserver-deployment-c7997dcc8- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-c7997dcc8-4x2tn 4465ee7f-175c-47b6-8460-96e25601b1ea 8033898 0 2020-02-12 22:07:04 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6faee88-c8a7-40c6-943a-2515afaf481d 0xc005793bb7 0xc005793bb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:07:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.684: INFO: Pod "webserver-deployment-c7997dcc8-brrcd" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-brrcd webserver-deployment-c7997dcc8- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-c7997dcc8-brrcd 0e678690-54da-46c8-888e-cf9ec3780b4c 8033877 0 2020-02-12 22:07:04 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6faee88-c8a7-40c6-943a-2515afaf481d 0xc005793cf7 0xc005793cf8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:07:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.684: INFO: Pod "webserver-deployment-c7997dcc8-fxv2z" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fxv2z webserver-deployment-c7997dcc8- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-c7997dcc8-fxv2z a2c41f9f-db97-481b-ad32-118ea175d816 8033904 0 2020-02-12 22:07:05 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6faee88-c8a7-40c6-943a-2515afaf481d 0xc005793e27 0xc005793e28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.685: INFO: Pod "webserver-deployment-c7997dcc8-g5lhd" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-g5lhd webserver-deployment-c7997dcc8- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-c7997dcc8-g5lhd 690b4ea5-bb82-4030-9f02-c75def74ce07 8033897 0 2020-02-12 22:07:04 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6faee88-c8a7-40c6-943a-2515afaf481d 0xc005793f50 0xc005793f51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:07:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.685: INFO: Pod "webserver-deployment-c7997dcc8-gnp2c" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gnp2c webserver-deployment-c7997dcc8- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-c7997dcc8-gnp2c 44c6cbd8-da9d-4986-babd-bb35ef21ee4b 8033851 0 2020-02-12 22:06:58 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6faee88-c8a7-40c6-943a-2515afaf481d 0xc00423e077 0xc00423e078}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-12 22:06:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
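This is the first pod in the listing whose container status is populated while unready: the httpd container sits in Waiting with Reason ContainerCreating, so kubelet reports Ready=False with Reason ContainersNotReady and names the unready container in the condition message. Since the template image is webserver:404, which looks like a deliberately unresolvable image used by this test, these pods can be expected to stay in that state. A small sketch of summarizing unready containers the way that condition message does (simplified local types, not kubelet's code):

    package main

    import "fmt"

    // Simplified stand-in for v1.ContainerStatus.
    type ContainerStatus struct {
        Name          string
        Ready         bool
        WaitingReason string // empty when the container is not in Waiting
    }

    // unreadyContainers mirrors, in miniature, how the Ready condition message
    // ("containers with unready status: [httpd]") is assembled.
    func unreadyContainers(statuses []ContainerStatus) []string {
        var names []string
        for _, s := range statuses {
            if !s.Ready {
                names = append(names, s.Name)
            }
        }
        return names
    }

    func main() {
        st := []ContainerStatus{{Name: "httpd", Ready: false, WaitingReason: "ContainerCreating"}}
        fmt.Println(unreadyContainers(st)) // [httpd]
    }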
Feb 12 22:07:08.685: INFO: Pod "webserver-deployment-c7997dcc8-hfd7j" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hfd7j webserver-deployment-c7997dcc8- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-c7997dcc8-hfd7j f7bd8f10-7eaf-409b-b529-7f42ecdba3c5 8033829 0 2020-02-12 22:06:58 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6faee88-c8a7-40c6-943a-2515afaf481d 0xc00423e207 0xc00423e208}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-12 22:06:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.685: INFO: Pod "webserver-deployment-c7997dcc8-jrjwt" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jrjwt webserver-deployment-c7997dcc8- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-c7997dcc8-jrjwt 9b3c7faf-e869-4798-8584-a8c4ada2a596 8033883 0 2020-02-12 22:07:04 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6faee88-c8a7-40c6-943a-2515afaf481d 0xc00423e397 0xc00423e398}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:07:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.685: INFO: Pod "webserver-deployment-c7997dcc8-mvpg7" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mvpg7 webserver-deployment-c7997dcc8- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-c7997dcc8-mvpg7 4c3666b7-fe17-4778-acf3-b7fa7fd83fc2 8033825 0 2020-02-12 22:06:58 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6faee88-c8a7-40c6-943a-2515afaf481d 0xc00423e4c7 0xc00423e4c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-12 22:06:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.685: INFO: Pod "webserver-deployment-c7997dcc8-svhgp" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-svhgp webserver-deployment-c7997dcc8- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-c7997dcc8-svhgp c5cfbcfe-5619-441c-a8af-ccdb862f9210 8033846 0 2020-02-12 22:06:58 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6faee88-c8a7-40c6-943a-2515afaf481d 0xc00423e637 0xc00423e638}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-12 22:06:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.686: INFO: Pod "webserver-deployment-c7997dcc8-twsbh" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-twsbh webserver-deployment-c7997dcc8- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-c7997dcc8-twsbh b3fbcdd9-42af-4571-99f5-85037e682190 8033906 0 2020-02-12 22:07:04 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6faee88-c8a7-40c6-943a-2515afaf481d 0xc00423e7c7 0xc00423e7c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:07:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.686: INFO: Pod "webserver-deployment-c7997dcc8-vcfdc" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vcfdc webserver-deployment-c7997dcc8- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-c7997dcc8-vcfdc 0d38f48f-fbc6-4692-8f13-8123e52663ef 8033852 0 2020-02-12 22:06:58 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6faee88-c8a7-40c6-943a-2515afaf481d 0xc00423e8f7 0xc00423e8f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:06:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-12 22:06:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 12 22:07:08.687: INFO: Pod "webserver-deployment-c7997dcc8-x8mbm" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-x8mbm webserver-deployment-c7997dcc8- deployment-5669 /api/v1/namespaces/deployment-5669/pods/webserver-deployment-c7997dcc8-x8mbm 63503452-2abc-470a-8842-cf314e5bf659 8033870 0 2020-02-12 22:07:03 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 d6faee88-c8a7-40c6-943a-2515afaf481d 0xc00423ebe7 0xc00423ebe8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n72f5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n72f5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n72f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:07:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:07:08.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5669" for this suite.

• [SLOW TEST:42.258 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":277,"completed":249,"skipped":4186,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:07:12.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-e41134ba-0397-4408-9037-5f99a881282b
STEP: Creating a pod to test consume configMaps
Feb 12 22:07:25.240: INFO: Waiting up to 5m0s for pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1" in namespace "configmap-7538" to be "Succeeded or Failed"
Feb 12 22:07:26.111: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 871.335201ms
Feb 12 22:07:28.161: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.92105341s
Feb 12 22:07:30.190: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.950327069s
Feb 12 22:07:32.474: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.234082314s
Feb 12 22:07:35.618: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.378253926s
Feb 12 22:07:38.389: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 13.148750288s
Feb 12 22:07:40.929: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.688947273s
Feb 12 22:07:44.121: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 18.880782983s
Feb 12 22:07:48.112: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.872331165s
Feb 12 22:07:51.940: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 26.700468996s
Feb 12 22:07:54.066: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 28.826482068s
Feb 12 22:07:58.213: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 32.973349551s
Feb 12 22:08:01.542: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 36.302037324s
Feb 12 22:08:03.761: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 38.520796834s
Feb 12 22:08:05.939: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 40.699606674s
Feb 12 22:08:09.949: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 44.708752885s
Feb 12 22:08:13.480: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 48.240102584s
Feb 12 22:08:18.287: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 53.047313893s
Feb 12 22:08:20.487: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 55.247252993s
Feb 12 22:08:25.182: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 59.941686428s
Feb 12 22:08:27.541: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.30074465s
Feb 12 22:08:30.636: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.396472811s
Feb 12 22:08:32.873: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.633292458s
Feb 12 22:08:34.878: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.637930396s
Feb 12 22:08:36.906: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m11.665997521s
Feb 12 22:08:39.114: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m13.874192926s
Feb 12 22:08:42.308: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m17.068094963s
Feb 12 22:08:44.498: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m19.257905881s
Feb 12 22:08:46.515: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m21.274842995s
Feb 12 22:08:49.460: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m24.219685633s
Feb 12 22:08:51.466: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m26.226020988s
Feb 12 22:08:53.472: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.232411177s
Feb 12 22:08:55.480: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.239757556s
Feb 12 22:08:57.489: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m32.249645991s
STEP: Saw pod success
Feb 12 22:08:57.490: INFO: Pod "pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1" satisfied condition "Succeeded or Failed"
Feb 12 22:08:57.494: INFO: Trying to get logs from node jerma-node pod pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1 container configmap-volume-test: 
STEP: delete the pod
Feb 12 22:08:57.562: INFO: Waiting for pod pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1 to disappear
Feb 12 22:08:57.584: INFO: Pod pod-configmaps-cf5b53d9-a12f-4c20-a658-3233894fa0b1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:08:57.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7538" for this suite.

• [SLOW TEST:105.290 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":277,"completed":250,"skipped":4217,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:08:57.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-84cdae4c-e651-46d4-99e3-159165052097
STEP: Creating a pod to test consume secrets
Feb 12 22:08:57.689: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b6534499-4f5f-4157-b340-c75b66728574" in namespace "projected-4233" to be "Succeeded or Failed"
Feb 12 22:08:57.693: INFO: Pod "pod-projected-secrets-b6534499-4f5f-4157-b340-c75b66728574": Phase="Pending", Reason="", readiness=false. Elapsed: 4.722284ms
Feb 12 22:08:59.700: INFO: Pod "pod-projected-secrets-b6534499-4f5f-4157-b340-c75b66728574": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011346745s
Feb 12 22:09:01.705: INFO: Pod "pod-projected-secrets-b6534499-4f5f-4157-b340-c75b66728574": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016570674s
Feb 12 22:09:03.830: INFO: Pod "pod-projected-secrets-b6534499-4f5f-4157-b340-c75b66728574": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141190831s
Feb 12 22:09:05.837: INFO: Pod "pod-projected-secrets-b6534499-4f5f-4157-b340-c75b66728574": Phase="Pending", Reason="", readiness=false. Elapsed: 8.148336826s
Feb 12 22:09:07.847: INFO: Pod "pod-projected-secrets-b6534499-4f5f-4157-b340-c75b66728574": Phase="Pending", Reason="", readiness=false. Elapsed: 10.157886942s
Feb 12 22:09:09.855: INFO: Pod "pod-projected-secrets-b6534499-4f5f-4157-b340-c75b66728574": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.166387016s
STEP: Saw pod success
Feb 12 22:09:09.855: INFO: Pod "pod-projected-secrets-b6534499-4f5f-4157-b340-c75b66728574" satisfied condition "Succeeded or Failed"
Feb 12 22:09:09.859: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-b6534499-4f5f-4157-b340-c75b66728574 container projected-secret-volume-test: 
STEP: delete the pod
Feb 12 22:09:09.893: INFO: Waiting for pod pod-projected-secrets-b6534499-4f5f-4157-b340-c75b66728574 to disappear
Feb 12 22:09:09.901: INFO: Pod pod-projected-secrets-b6534499-4f5f-4157-b340-c75b66728574 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:09:09.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4233" for this suite.

• [SLOW TEST:12.316 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":277,"completed":251,"skipped":4250,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:09:09.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:09:21.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8641" for this suite.

• [SLOW TEST:11.372 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":277,"completed":252,"skipped":4251,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
S
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:09:21.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-b248aa88-2866-4f63-8c8e-863858edb507
STEP: Creating secret with name s-test-opt-upd-0b95607f-22bf-4836-8d2b-7dc359e9fccc
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-b248aa88-2866-4f63-8c8e-863858edb507
STEP: Updating secret s-test-opt-upd-0b95607f-22bf-4836-8d2b-7dc359e9fccc
STEP: Creating secret with name s-test-opt-create-075f9caf-d5e9-426e-a1f2-2446635b26ef
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:09:33.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7890" for this suite.

• [SLOW TEST:12.352 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":277,"completed":253,"skipped":4252,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:09:33.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 12 22:09:52.066: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 12 22:09:52.081: INFO: Pod pod-with-poststart-http-hook still exists
Feb 12 22:09:54.081: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 12 22:09:54.089: INFO: Pod pod-with-poststart-http-hook still exists
Feb 12 22:09:56.081: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 12 22:09:56.088: INFO: Pod pod-with-poststart-http-hook still exists
Feb 12 22:09:58.081: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 12 22:09:58.089: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:09:58.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-18" for this suite.

• [SLOW TEST:24.515 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":277,"completed":254,"skipped":4267,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:09:58.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:229
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1548
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 12 22:09:58.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4612'
Feb 12 22:10:01.008: INFO: stderr: ""
Feb 12 22:10:01.008: INFO: stdout: "replicationcontroller/httpd-rc created\n"
Feb 12 22:10:01.012: INFO: Waiting for rc httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Feb 12 22:10:01.029: INFO: Waiting for rc httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 12 22:10:01.078: INFO: scanned /root for discovery docs: 
Feb 12 22:10:01.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4612'
Feb 12 22:10:23.996: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 12 22:10:23.996: INFO: stdout: "Created httpd-rc-eea78a72e9bb3586c533895e21e1d7ad\nScaling up httpd-rc-eea78a72e9bb3586c533895e21e1d7ad from 0 to 1, scaling down httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling httpd-rc-eea78a72e9bb3586c533895e21e1d7ad up to 1\nScaling httpd-rc down to 0\nUpdate succeeded. Deleting old controller: httpd-rc\nRenaming httpd-rc-eea78a72e9bb3586c533895e21e1d7ad to httpd-rc\nreplicationcontroller/httpd-rc rolling updated\n"
Feb 12 22:10:23.996: INFO: stdout: "Created httpd-rc-eea78a72e9bb3586c533895e21e1d7ad\nScaling up httpd-rc-eea78a72e9bb3586c533895e21e1d7ad from 0 to 1, scaling down httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling httpd-rc-eea78a72e9bb3586c533895e21e1d7ad up to 1\nScaling httpd-rc down to 0\nUpdate succeeded. Deleting old controller: httpd-rc\nRenaming httpd-rc-eea78a72e9bb3586c533895e21e1d7ad to httpd-rc\nreplicationcontroller/httpd-rc rolling updated\n"
STEP: waiting for all containers in run=httpd-rc pods to come up.
Feb 12 22:10:23.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=httpd-rc --namespace=kubectl-4612'
Feb 12 22:10:24.124: INFO: stderr: ""
Feb 12 22:10:24.124: INFO: stdout: "httpd-rc-eea78a72e9bb3586c533895e21e1d7ad-5bwzw "
Feb 12 22:10:24.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods httpd-rc-eea78a72e9bb3586c533895e21e1d7ad-5bwzw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4612'
Feb 12 22:10:24.210: INFO: stderr: ""
Feb 12 22:10:24.210: INFO: stdout: "true"
Feb 12 22:10:24.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods httpd-rc-eea78a72e9bb3586c533895e21e1d7ad-5bwzw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4612'
Feb 12 22:10:24.295: INFO: stderr: ""
Feb 12 22:10:24.295: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Feb 12 22:10:24.295: INFO: httpd-rc-eea78a72e9bb3586c533895e21e1d7ad-5bwzw is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1556
Feb 12 22:10:24.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc httpd-rc --namespace=kubectl-4612'
Feb 12 22:10:24.402: INFO: stderr: ""
Feb 12 22:10:24.402: INFO: stdout: "replicationcontroller \"httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:10:24.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4612" for this suite.

• [SLOW TEST:26.300 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1542
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":277,"completed":255,"skipped":4274,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:10:24.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Feb 12 22:10:35.482: INFO: Successfully updated pod "adopt-release-sp47s"
STEP: Checking that the Job readopts the Pod
Feb 12 22:10:35.482: INFO: Waiting up to 15m0s for pod "adopt-release-sp47s" in namespace "job-9378" to be "adopted"
Feb 12 22:10:35.487: INFO: Pod "adopt-release-sp47s": Phase="Running", Reason="", readiness=true. Elapsed: 4.504829ms
Feb 12 22:10:37.493: INFO: Pod "adopt-release-sp47s": Phase="Running", Reason="", readiness=true. Elapsed: 2.010816458s
Feb 12 22:10:37.493: INFO: Pod "adopt-release-sp47s" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Feb 12 22:10:38.006: INFO: Successfully updated pod "adopt-release-sp47s"
STEP: Checking that the Job releases the Pod
Feb 12 22:10:38.006: INFO: Waiting up to 15m0s for pod "adopt-release-sp47s" in namespace "job-9378" to be "released"
Feb 12 22:10:38.051: INFO: Pod "adopt-release-sp47s": Phase="Running", Reason="", readiness=true. Elapsed: 44.099035ms
Feb 12 22:10:40.056: INFO: Pod "adopt-release-sp47s": Phase="Running", Reason="", readiness=true. Elapsed: 2.049318189s
Feb 12 22:10:40.056: INFO: Pod "adopt-release-sp47s" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:10:40.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9378" for this suite.

• [SLOW TEST:15.609 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":277,"completed":256,"skipped":4279,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:10:40.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 22:10:40.280: INFO: Create a RollingUpdate DaemonSet
Feb 12 22:10:40.284: INFO: Check that daemon pods launch on every node of the cluster
Feb 12 22:10:40.380: INFO: Number of nodes with available pods: 0
Feb 12 22:10:40.380: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:10:41.400: INFO: Number of nodes with available pods: 0
Feb 12 22:10:41.400: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:10:42.702: INFO: Number of nodes with available pods: 0
Feb 12 22:10:42.703: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:10:43.440: INFO: Number of nodes with available pods: 0
Feb 12 22:10:43.440: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:10:44.392: INFO: Number of nodes with available pods: 0
Feb 12 22:10:44.392: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:10:46.495: INFO: Number of nodes with available pods: 0
Feb 12 22:10:46.495: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:10:47.784: INFO: Number of nodes with available pods: 0
Feb 12 22:10:47.784: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:10:49.300: INFO: Number of nodes with available pods: 0
Feb 12 22:10:49.300: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:10:49.680: INFO: Number of nodes with available pods: 0
Feb 12 22:10:49.681: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:10:50.411: INFO: Number of nodes with available pods: 0
Feb 12 22:10:50.411: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:10:51.899: INFO: Number of nodes with available pods: 1
Feb 12 22:10:51.899: INFO: Node jerma-node is running more than one daemon pod
Feb 12 22:10:53.146: INFO: Number of nodes with available pods: 2
Feb 12 22:10:53.146: INFO: Number of running nodes: 2, number of available pods: 2
Feb 12 22:10:53.146: INFO: Update the DaemonSet to trigger a rollout
Feb 12 22:10:53.158: INFO: Updating DaemonSet daemon-set
Feb 12 22:11:03.217: INFO: Roll back the DaemonSet before rollout is complete
Feb 12 22:11:03.227: INFO: Updating DaemonSet daemon-set
Feb 12 22:11:03.227: INFO: Make sure DaemonSet rollback is complete
Feb 12 22:11:03.232: INFO: Wrong image for pod: daemon-set-86wrg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 12 22:11:03.232: INFO: Pod daemon-set-86wrg is not available
Feb 12 22:11:04.246: INFO: Wrong image for pod: daemon-set-86wrg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 12 22:11:04.246: INFO: Pod daemon-set-86wrg is not available
Feb 12 22:11:05.249: INFO: Wrong image for pod: daemon-set-86wrg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 12 22:11:05.249: INFO: Pod daemon-set-86wrg is not available
Feb 12 22:11:06.249: INFO: Wrong image for pod: daemon-set-86wrg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 12 22:11:06.249: INFO: Pod daemon-set-86wrg is not available
Feb 12 22:11:07.248: INFO: Wrong image for pod: daemon-set-86wrg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 12 22:11:07.248: INFO: Pod daemon-set-86wrg is not available
Feb 12 22:11:08.246: INFO: Wrong image for pod: daemon-set-86wrg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Feb 12 22:11:08.246: INFO: Pod daemon-set-86wrg is not available
Feb 12 22:11:09.247: INFO: Pod daemon-set-kjc4h is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1924, will wait for the garbage collector to delete the pods
Feb 12 22:11:09.324: INFO: Deleting DaemonSet.extensions daemon-set took: 9.042992ms
Feb 12 22:11:09.724: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.351503ms
Feb 12 22:11:16.047: INFO: Number of nodes with available pods: 0
Feb 12 22:11:16.047: INFO: Number of running nodes: 0, number of available pods: 0
Feb 12 22:11:16.050: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1924/daemonsets","resourceVersion":"8034989"},"items":null}

Feb 12 22:11:16.052: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1924/pods","resourceVersion":"8034989"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:11:16.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1924" for this suite.

• [SLOW TEST:36.088 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":277,"completed":257,"skipped":4309,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:11:16.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 12 22:11:27.518: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:11:28.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6432" for this suite.

• [SLOW TEST:12.406 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":277,"completed":258,"skipped":4319,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:11:28.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 12 22:11:29.174: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 12 22:11:31.191: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:11:33.233: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:11:35.202: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:11:39.445: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:11:41.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:11:43.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142289, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 12 22:11:46.229: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
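Note: "paired with the endpoint" means the Endpoints object behind the webhook service has one ready address. An equivalent manual check (service name and namespace taken from this run; the command itself is a sketch, not part of the test) would be:

  kubectl --kubeconfig=/root/.kube/config --namespace=webhook-5734 get endpoints e2e-test-webhook

One address listed under the subsets means the webhook pod is reachable through the service.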
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 22:11:46.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-996-crds.webhook.example.com via the AdmissionRegistration API
Feb 12 22:11:46.860: INFO: Waiting for webhook configuration to be ready...
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:11:47.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5734" for this suite.
STEP: Destroying namespace "webhook-5734-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:19.472 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":277,"completed":259,"skipped":4339,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:11:48.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 12 22:11:48.913: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Feb 12 22:11:50.927: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142308, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142308, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142309, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142308, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
(the identical deployment status is logged again at 22:11:52.932, 22:11:54.932, 22:11:56.932, and 22:11:58.932)
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 12 22:12:01.983: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:12:02.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8671" for this suite.
STEP: Destroying namespace "webhook-8671-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:14.198 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":277,"completed":260,"skipped":4352,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:12:02.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 12 22:12:09.177: INFO: 0 pods remaining
Feb 12 22:12:09.177: INFO: 0 pods have nil DeletionTimestamp
Feb 12 22:12:09.177: INFO: 
STEP: Gathering metrics
W0212 22:12:09.911651       9 metrics_grabber.go:80] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 22:12:09.911: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:12:09.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9287" for this suite.

• [SLOW TEST:7.709 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":277,"completed":261,"skipped":4355,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:12:09.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 22:12:10.256: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 31.456275ms)
Feb 12 22:12:10.263: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 6.818284ms)
Feb 12 22:12:10.270: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 7.114732ms)
Feb 12 22:12:10.277: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 6.613611ms)
Feb 12 22:12:10.287: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 9.837379ms)
Feb 12 22:12:10.452: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 165.031609ms)
Feb 12 22:12:10.469: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 17.367051ms)
Feb 12 22:12:10.589: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 119.845531ms)
Feb 12 22:12:10.609: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 19.114823ms)
Feb 12 22:12:10.645: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 35.473604ms)
Feb 12 22:12:10.657: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 11.898232ms)
Feb 12 22:12:10.788: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 130.370692ms)
Feb 12 22:12:10.801: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 13.157583ms)
Feb 12 22:12:10.843: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 41.902391ms)
Feb 12 22:12:10.892: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 48.681847ms)
Feb 12 22:12:10.934: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 41.236516ms)
Feb 12 22:12:10.953: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 19.131282ms)
Feb 12 22:12:10.959: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 5.680434ms)
Feb 12 22:12:10.965: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 5.630215ms)
Feb 12 22:12:10.971: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub/proxy/logs/: alternatives.log alternatives.l... (200; 6.234645ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:12:10.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4987" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":277,"completed":262,"skipped":4365,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:12:10.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Feb 12 22:12:11.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:12:28.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9720" for this suite.

• [SLOW TEST:17.468 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":277,"completed":263,"skipped":4374,"failed":1,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:12:28.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:229
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Feb 12 22:12:28.549: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Feb 12 22:12:28.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3830'
Feb 12 22:12:28.981: INFO: stderr: ""
Feb 12 22:12:28.981: INFO: stdout: "service/agnhost-slave created\n"
Feb 12 22:12:28.981: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Feb 12 22:12:28.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3830'
Feb 12 22:12:29.489: INFO: stderr: ""
Feb 12 22:12:29.489: INFO: stdout: "service/agnhost-master created\n"
Feb 12 22:12:29.490: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 12 22:12:29.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3830'
Feb 12 22:12:29.903: INFO: stderr: ""
Feb 12 22:12:29.903: INFO: stdout: "service/frontend created\n"
Feb 12 22:12:29.904: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Feb 12 22:12:29.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3830'
Feb 12 22:12:30.383: INFO: stderr: ""
Feb 12 22:12:30.384: INFO: stdout: "deployment.apps/frontend created\n"
Feb 12 22:12:30.384: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 12 22:12:30.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3830'
Feb 12 22:12:32.132: INFO: stderr: ""
Feb 12 22:12:32.132: INFO: stdout: "deployment.apps/agnhost-master created\n"
Feb 12 22:12:32.132: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 12 22:12:32.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3830'
Feb 12 22:12:33.532: INFO: stderr: ""
Feb 12 22:12:33.533: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Feb 12 22:12:33.533: INFO: Waiting for all frontend pods to be Running.
Feb 12 22:12:58.585: INFO: Waiting for frontend to serve content.
Feb 12 22:12:58.613: INFO: Trying to add a new entry to the guestbook.
Feb 12 22:12:58.643: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused
(the identical "connection refused" failure is logged on every ~5s retry from 22:13:03.663 through 22:15:55.638)

Feb 12 22:16:00.639: FAIL: Cannot add new entry in 180 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.validateGuestbookApp(0x5596ea0, 0xc002060580, 0xc0048a9ae0, 0xc)
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2027 +0x551
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.7.2()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:369 +0x165
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001912f00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:111 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc001912f00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc001912f00, 0x4cf4ab0)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
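Note: every retry above fails the same way: the frontend pods come up and serve, but their write propagation to the slave endpoint 10.32.0.1:6379 is refused. A diagnostic probe, not part of the test, could reproduce the request from inside the cluster; the pod name below is made up, and appropriate/curl is chosen only because the nodes already have that image cached:

  kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3830 run curl-probe --image=appropriate/curl --restart=Never --command -- curl -sv 'http://10.32.0.1:6379/set?key=messages&value=TestEntry'

A "connection refused" from the probe as well would point at the slave pods or the network path to 10.32.0.1 rather than at the guestbook frontend itself.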
STEP: using delete to clean up resources
Feb 12 22:16:00.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3830'
Feb 12 22:16:00.825: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 22:16:00.826: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 12 22:16:00.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3830'
Feb 12 22:16:01.013: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 22:16:01.013: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 12 22:16:01.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3830'
Feb 12 22:16:01.159: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 22:16:01.159: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 12 22:16:01.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3830'
Feb 12 22:16:01.247: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 22:16:01.247: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 12 22:16:01.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3830'
Feb 12 22:16:01.374: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 22:16:01.374: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 12 22:16:01.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3830'
Feb 12 22:16:01.503: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 22:16:01.503: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
STEP: Collecting events from namespace "kubectl-3830".
STEP: Found 33 events.
Feb 12 22:16:01.520: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-master-74c46fb7d4-rvbmz: {default-scheduler } Scheduled: Successfully assigned kubectl-3830/agnhost-master-74c46fb7d4-rvbmz to jerma-node
Feb 12 22:16:01.521: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-24tqb: {default-scheduler } Scheduled: Successfully assigned kubectl-3830/agnhost-slave-774cfc759f-24tqb to jerma-node
Feb 12 22:16:01.521: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-qxqdq: {default-scheduler } Scheduled: Successfully assigned kubectl-3830/agnhost-slave-774cfc759f-qxqdq to jerma-server-mvvl6gufaqub
Feb 12 22:16:01.521: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-fndkk: {default-scheduler } Scheduled: Successfully assigned kubectl-3830/frontend-6c5f89d5d4-fndkk to jerma-node
Feb 12 22:16:01.521: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-jnh9v: {default-scheduler } Scheduled: Successfully assigned kubectl-3830/frontend-6c5f89d5d4-jnh9v to jerma-node
Feb 12 22:16:01.521: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-ms8b7: {default-scheduler } Scheduled: Successfully assigned kubectl-3830/frontend-6c5f89d5d4-ms8b7 to jerma-server-mvvl6gufaqub
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:30 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-6c5f89d5d4 to 3
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:30 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-ms8b7
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:30 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-jnh9v
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:30 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-fndkk
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:33 +0000 UTC - event for agnhost-master: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-master-74c46fb7d4 to 1
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:33 +0000 UTC - event for agnhost-master-74c46fb7d4: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-master-74c46fb7d4-rvbmz
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:33 +0000 UTC - event for agnhost-slave: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-slave-774cfc759f to 2
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:34 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-qxqdq
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:34 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-24tqb
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:40 +0000 UTC - event for frontend-6c5f89d5d4-ms8b7: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:41 +0000 UTC - event for frontend-6c5f89d5d4-fndkk: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:42 +0000 UTC - event for frontend-6c5f89d5d4-jnh9v: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:44 +0000 UTC - event for agnhost-slave-774cfc759f-qxqdq: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:46 +0000 UTC - event for agnhost-master-74c46fb7d4-rvbmz: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:47 +0000 UTC - event for agnhost-slave-774cfc759f-24tqb: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:48 +0000 UTC - event for frontend-6c5f89d5d4-ms8b7: {kubelet jerma-server-mvvl6gufaqub} Created: Created container guestbook-frontend
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:49 +0000 UTC - event for agnhost-slave-774cfc759f-qxqdq: {kubelet jerma-server-mvvl6gufaqub} Created: Created container slave
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:51 +0000 UTC - event for agnhost-slave-774cfc759f-qxqdq: {kubelet jerma-server-mvvl6gufaqub} Started: Started container slave
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:51 +0000 UTC - event for frontend-6c5f89d5d4-ms8b7: {kubelet jerma-server-mvvl6gufaqub} Started: Started container guestbook-frontend
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:52 +0000 UTC - event for frontend-6c5f89d5d4-fndkk: {kubelet jerma-node} Created: Created container guestbook-frontend
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:52 +0000 UTC - event for frontend-6c5f89d5d4-jnh9v: {kubelet jerma-node} Created: Created container guestbook-frontend
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:53 +0000 UTC - event for agnhost-master-74c46fb7d4-rvbmz: {kubelet jerma-node} Created: Created container master
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:53 +0000 UTC - event for agnhost-slave-774cfc759f-24tqb: {kubelet jerma-node} Created: Created container slave
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:53 +0000 UTC - event for frontend-6c5f89d5d4-fndkk: {kubelet jerma-node} Started: Started container guestbook-frontend
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:53 +0000 UTC - event for frontend-6c5f89d5d4-jnh9v: {kubelet jerma-node} Started: Started container guestbook-frontend
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:54 +0000 UTC - event for agnhost-master-74c46fb7d4-rvbmz: {kubelet jerma-node} Started: Started container master
Feb 12 22:16:01.521: INFO: At 2020-02-12 22:12:54 +0000 UTC - event for agnhost-slave-774cfc759f-24tqb: {kubelet jerma-node} Started: Started container slave
Feb 12 22:16:01.541: INFO: POD                              NODE                       PHASE    GRACE  CONDITIONS
Feb 12 22:16:01.541: INFO: agnhost-master-74c46fb7d4-rvbmz  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:34 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:33 +0000 UTC  }]
Feb 12 22:16:01.541: INFO: agnhost-slave-774cfc759f-24tqb   jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:35 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:34 +0000 UTC  }]
Feb 12 22:16:01.541: INFO: agnhost-slave-774cfc759f-qxqdq   jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:35 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:34 +0000 UTC  }]
Feb 12 22:16:01.541: INFO: frontend-6c5f89d5d4-fndkk        jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:30 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:30 +0000 UTC  }]
Feb 12 22:16:01.541: INFO: frontend-6c5f89d5d4-jnh9v        jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:33 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:55 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:55 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:30 +0000 UTC  }]
Feb 12 22:16:01.541: INFO: frontend-6c5f89d5d4-ms8b7        jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:32 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 22:12:30 +0000 UTC  }]
Feb 12 22:16:01.541: INFO: 
Feb 12 22:16:01.545: INFO: 
Logging node info for node jerma-node
Feb 12 22:16:01.574: INFO: Node Info: &Node{ObjectMeta:{jerma-node   /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 8035933 0 2020-01-04 11:59:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-12 22:14:58 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-12 22:14:58 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-12 22:14:58 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-12 22:14:58 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Feb 12 22:16:01.577: INFO: 
Logging kubelet events for node jerma-node
Feb 12 22:16:01.675: INFO: 
Logging pods the kubelet thinks are on node jerma-node
Feb 12 22:16:01.755: INFO: agnhost-master-74c46fb7d4-rvbmz started at 2020-02-12 22:12:34 +0000 UTC (0+1 container statuses recorded)
Feb 12 22:16:01.755: INFO: 	Container master ready: true, restart count 0
Feb 12 22:16:01.755: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded)
Feb 12 22:16:01.755: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 22:16:01.755: INFO: agnhost-slave-774cfc759f-24tqb started at 2020-02-12 22:12:35 +0000 UTC (0+1 container statuses recorded)
Feb 12 22:16:01.755: INFO: 	Container slave ready: true, restart count 0
Feb 12 22:16:01.755: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded)
Feb 12 22:16:01.755: INFO: 	Container weave ready: true, restart count 1
Feb 12 22:16:01.755: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 22:16:01.755: INFO: frontend-6c5f89d5d4-fndkk started at 2020-02-12 22:12:30 +0000 UTC (0+1 container statuses recorded)
Feb 12 22:16:01.755: INFO: 	Container guestbook-frontend ready: true, restart count 0
Feb 12 22:16:01.755: INFO: frontend-6c5f89d5d4-jnh9v started at 2020-02-12 22:12:33 +0000 UTC (0+1 container statuses recorded)
Feb 12 22:16:01.755: INFO: 	Container guestbook-frontend ready: true, restart count 0
W0212 22:16:01.798586       9 metrics_grabber.go:80] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 22:16:01.889: INFO: 
Latency metrics for node jerma-node
Feb 12 22:16:01.889: INFO: 
Logging node info for node jerma-server-mvvl6gufaqub
Feb 12 22:16:01.913: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub   /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 8035584 0 2020-01-04 11:47:40 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-12 22:12:38 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-12 22:12:38 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-12 22:12:38 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-12 22:12:38 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[ollivier/functest-kubernetes-security@sha256:e07875af6d375759fd233dc464382bb51d2464f6ae50a60625e41226eb1f87be ollivier/functest-kubernetes-security:latest],SizeBytes:1118568659,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
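The Node dump above is the diagnostic the framework prints on failure; the same condition data can be read directly with client-go. A minimal sketch, assuming a configured clientset (the function name and wiring are illustrative, not the framework's actual code):

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listNodeConditions prints each node's Ready condition, mirroring the
// NodeCondition entries in the dump above.
func listNodeConditions(ctx context.Context, c kubernetes.Interface) error {
	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		for _, cond := range n.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s since %s (%s)\n",
					n.Name, cond.Status, cond.LastTransitionTime, cond.Reason)
			}
		}
	}
	return nil
}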
Feb 12 22:16:01.914: INFO: 
Logging kubelet events for node jerma-server-mvvl6gufaqub
Feb 12 22:16:01.920: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub
Feb 12 22:16:01.945: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Feb 12 22:16:01.945: INFO: 	Container kube-controller-manager ready: true, restart count 6
Feb 12 22:16:01.945: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded)
Feb 12 22:16:01.945: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 12 22:16:01.945: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded)
Feb 12 22:16:01.945: INFO: 	Container weave ready: true, restart count 0
Feb 12 22:16:01.945: INFO: 	Container weave-npc ready: true, restart count 0
Feb 12 22:16:01.945: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Feb 12 22:16:01.945: INFO: 	Container kube-scheduler ready: true, restart count 10
Feb 12 22:16:01.945: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Feb 12 22:16:01.945: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 12 22:16:01.945: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Feb 12 22:16:01.945: INFO: 	Container etcd ready: true, restart count 1
Feb 12 22:16:01.945: INFO: agnhost-slave-774cfc759f-qxqdq started at 2020-02-12 22:12:35 +0000 UTC (0+1 container statuses recorded)
Feb 12 22:16:01.945: INFO: 	Container slave ready: true, restart count 0
Feb 12 22:16:01.945: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Feb 12 22:16:01.945: INFO: 	Container coredns ready: true, restart count 0
Feb 12 22:16:01.945: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Feb 12 22:16:01.945: INFO: 	Container coredns ready: true, restart count 0
Feb 12 22:16:01.945: INFO: frontend-6c5f89d5d4-ms8b7 started at 2020-02-12 22:12:32 +0000 UTC (0+1 container statuses recorded)
Feb 12 22:16:01.945: INFO: 	Container guestbook-frontend ready: true, restart count 0
W0212 22:16:01.954575       9 metrics_grabber.go:80] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 22:16:01.999: INFO: 
Latency metrics for node jerma-server-mvvl6gufaqub
Feb 12 22:16:01.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3830" for this suite.

• Failure [214.569 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:337
    should create and stop a working application  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703

    Feb 12 22:16:00.639: Cannot add new entry in 180 seconds.

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2027
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":277,"completed":263,"skipped":4378,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:16:03.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 12 22:16:06.783: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 12 22:16:08.878: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142567, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142567, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142567, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142566, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:16:10.959: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142567, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142567, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142567, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142566, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:16:12.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142567, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142567, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142567, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142566, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:16:14.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142567, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142567, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142567, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142566, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
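The repeated DeploymentStatus dumps above are the framework re-checking the webhook deployment until it reports an available replica. A compact sketch of that kind of readiness poll using client-go's wait helper (clientset wiring omitted; names are illustrative):

package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDeploymentReady polls until the deployment's observed generation
// has caught up and all desired replicas are available.
func waitForDeploymentReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		d, err := c.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		done := d.Status.ObservedGeneration >= d.Generation &&
			d.Status.AvailableReplicas == *d.Spec.Replicas
		return done, nil
	})
}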
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 12 22:16:18.256: INFO: Waiting for the number of endpoints of service e2e-test-webhook to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 22:16:18.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7303-crds.webhook.example.com via the AdmissionRegistration API
Feb 12 22:16:18.897: INFO: Waiting for webhook configuration to be ready...
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:16:19.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2676" for this suite.
STEP: Destroying namespace "webhook-2676-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.890 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":277,"completed":264,"skipped":4384,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:16:19.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:229
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Feb 12 22:16:20.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 12 22:16:20.222: INFO: stderr: ""
Feb 12 22:16:20.222: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:16:20.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2324" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":277,"completed":265,"skipped":4432,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:16:20.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-407be8df-fcaf-4ad9-8b7e-60f68cde3250 in namespace container-probe-2835
Feb 12 22:16:30.349: INFO: Started pod liveness-407be8df-fcaf-4ad9-8b7e-60f68cde3250 in namespace container-probe-2835
STEP: checking the pod's current state and verifying that restartCount is present
Feb 12 22:16:30.352: INFO: Initial restart count of pod liveness-407be8df-fcaf-4ad9-8b7e-60f68cde3250 is 0
Feb 12 22:16:46.804: INFO: Restart count of pod container-probe-2835/liveness-407be8df-fcaf-4ad9-8b7e-60f68cde3250 is now 1 (16.452159173s elapsed)
Feb 12 22:17:06.912: INFO: Restart count of pod container-probe-2835/liveness-407be8df-fcaf-4ad9-8b7e-60f68cde3250 is now 2 (36.560259062s elapsed)
Feb 12 22:17:27.021: INFO: Restart count of pod container-probe-2835/liveness-407be8df-fcaf-4ad9-8b7e-60f68cde3250 is now 3 (56.669028707s elapsed)
Feb 12 22:17:47.091: INFO: Restart count of pod container-probe-2835/liveness-407be8df-fcaf-4ad9-8b7e-60f68cde3250 is now 4 (1m16.739387035s elapsed)
Feb 12 22:18:53.467: INFO: Restart count of pod container-probe-2835/liveness-407be8df-fcaf-4ad9-8b7e-60f68cde3250 is now 5 (2m23.114784508s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:18:53.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2835" for this suite.

• [SLOW TEST:153.347 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":277,"completed":266,"skipped":4432,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:18:53.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:18:53.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4207" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":277,"completed":267,"skipped":4452,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:18:53.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:229
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:281
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Feb 12 22:18:54.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-825'
Feb 12 22:18:54.544: INFO: stderr: ""
Feb 12 22:18:54.544: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 12 22:18:54.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-825'
Feb 12 22:18:54.731: INFO: stderr: ""
Feb 12 22:18:54.731: INFO: stdout: "update-demo-nautilus-2q927 update-demo-nautilus-b7bgc "
Feb 12 22:18:54.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2q927 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-825'
Feb 12 22:18:54.820: INFO: stderr: ""
Feb 12 22:18:54.820: INFO: stdout: ""
Feb 12 22:18:54.820: INFO: update-demo-nautilus-2q927 is created but not running
Feb 12 22:18:59.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-825'
Feb 12 22:19:00.603: INFO: stderr: ""
Feb 12 22:19:00.603: INFO: stdout: "update-demo-nautilus-2q927 update-demo-nautilus-b7bgc "
Feb 12 22:19:00.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2q927 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-825'
Feb 12 22:19:01.967: INFO: stderr: ""
Feb 12 22:19:01.968: INFO: stdout: ""
Feb 12 22:19:01.968: INFO: update-demo-nautilus-2q927 is created but not running
Feb 12 22:19:06.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-825'
Feb 12 22:19:07.116: INFO: stderr: ""
Feb 12 22:19:07.116: INFO: stdout: "update-demo-nautilus-2q927 update-demo-nautilus-b7bgc "
Feb 12 22:19:07.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2q927 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-825'
Feb 12 22:19:07.251: INFO: stderr: ""
Feb 12 22:19:07.251: INFO: stdout: "true"
Feb 12 22:19:07.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2q927 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-825'
Feb 12 22:19:07.327: INFO: stderr: ""
Feb 12 22:19:07.327: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 22:19:07.327: INFO: validating pod update-demo-nautilus-2q927
Feb 12 22:19:07.336: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 22:19:07.336: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 12 22:19:07.336: INFO: update-demo-nautilus-2q927 is verified up and running
Feb 12 22:19:07.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b7bgc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-825'
Feb 12 22:19:07.408: INFO: stderr: ""
Feb 12 22:19:07.408: INFO: stdout: "true"
Feb 12 22:19:07.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b7bgc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-825'
Feb 12 22:19:07.514: INFO: stderr: ""
Feb 12 22:19:07.514: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 22:19:07.514: INFO: validating pod update-demo-nautilus-b7bgc
Feb 12 22:19:07.524: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 22:19:07.524: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 12 22:19:07.524: INFO: update-demo-nautilus-b7bgc is verified up and running
STEP: scaling down the replication controller
Feb 12 22:19:07.526: INFO: scanned /root for discovery docs: 
Feb 12 22:19:07.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-825'
Feb 12 22:19:08.671: INFO: stderr: ""
Feb 12 22:19:08.671: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 12 22:19:08.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-825'
Feb 12 22:19:08.793: INFO: stderr: ""
Feb 12 22:19:08.793: INFO: stdout: "update-demo-nautilus-2q927 update-demo-nautilus-b7bgc "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 12 22:19:13.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-825'
Feb 12 22:19:13.936: INFO: stderr: ""
Feb 12 22:19:13.936: INFO: stdout: "update-demo-nautilus-2q927 update-demo-nautilus-b7bgc "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 12 22:19:18.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-825'
Feb 12 22:19:19.101: INFO: stderr: ""
Feb 12 22:19:19.101: INFO: stdout: "update-demo-nautilus-2q927 update-demo-nautilus-b7bgc "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 12 22:19:24.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-825'
Feb 12 22:19:24.197: INFO: stderr: ""
Feb 12 22:19:24.197: INFO: stdout: "update-demo-nautilus-b7bgc "
Feb 12 22:19:24.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b7bgc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-825'
Feb 12 22:19:24.319: INFO: stderr: ""
Feb 12 22:19:24.319: INFO: stdout: "true"
Feb 12 22:19:24.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b7bgc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-825'
Feb 12 22:19:24.397: INFO: stderr: ""
Feb 12 22:19:24.397: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 22:19:24.397: INFO: validating pod update-demo-nautilus-b7bgc
Feb 12 22:19:24.400: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 22:19:24.400: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 12 22:19:24.400: INFO: update-demo-nautilus-b7bgc is verified up and running
STEP: scaling up the replication controller
Feb 12 22:19:24.402: INFO: scanned /root for discovery docs: 
Feb 12 22:19:24.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-825'
Feb 12 22:19:25.555: INFO: stderr: ""
Feb 12 22:19:25.555: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 12 22:19:25.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-825'
Feb 12 22:19:25.629: INFO: stderr: ""
Feb 12 22:19:25.629: INFO: stdout: "update-demo-nautilus-b7bgc update-demo-nautilus-ft8br "
Feb 12 22:19:25.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b7bgc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-825'
Feb 12 22:19:25.752: INFO: stderr: ""
Feb 12 22:19:25.752: INFO: stdout: "true"
Feb 12 22:19:25.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b7bgc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-825'
Feb 12 22:19:25.919: INFO: stderr: ""
Feb 12 22:19:25.919: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 22:19:25.919: INFO: validating pod update-demo-nautilus-b7bgc
Feb 12 22:19:25.922: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 22:19:25.923: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 12 22:19:25.923: INFO: update-demo-nautilus-b7bgc is verified up and running
Feb 12 22:19:25.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ft8br -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-825'
Feb 12 22:19:26.086: INFO: stderr: ""
Feb 12 22:19:26.086: INFO: stdout: ""
Feb 12 22:19:26.086: INFO: update-demo-nautilus-ft8br is created but not running
Feb 12 22:19:31.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-825'
Feb 12 22:19:31.229: INFO: stderr: ""
Feb 12 22:19:31.229: INFO: stdout: "update-demo-nautilus-b7bgc update-demo-nautilus-ft8br "
Feb 12 22:19:31.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b7bgc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-825'
Feb 12 22:19:31.358: INFO: stderr: ""
Feb 12 22:19:31.358: INFO: stdout: "true"
Feb 12 22:19:31.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b7bgc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-825'
Feb 12 22:19:31.443: INFO: stderr: ""
Feb 12 22:19:31.443: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 22:19:31.443: INFO: validating pod update-demo-nautilus-b7bgc
Feb 12 22:19:31.446: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 22:19:31.446: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 12 22:19:31.446: INFO: update-demo-nautilus-b7bgc is verified up and running
Feb 12 22:19:31.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ft8br -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-825'
Feb 12 22:19:31.559: INFO: stderr: ""
Feb 12 22:19:31.559: INFO: stdout: "true"
Feb 12 22:19:31.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ft8br -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-825'
Feb 12 22:19:31.664: INFO: stderr: ""
Feb 12 22:19:31.664: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 12 22:19:31.664: INFO: validating pod update-demo-nautilus-ft8br
Feb 12 22:19:31.673: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 12 22:19:31.673: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 12 22:19:31.673: INFO: update-demo-nautilus-ft8br is verified up and running
STEP: using delete to clean up resources
Feb 12 22:19:31.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-825'
Feb 12 22:19:31.775: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 12 22:19:31.775: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 12 22:19:31.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-825'
Feb 12 22:19:32.022: INFO: stderr: "No resources found in kubectl-825 namespace.\n"
Feb 12 22:19:32.022: INFO: stdout: ""
Feb 12 22:19:32.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-825 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 12 22:19:32.211: INFO: stderr: ""
Feb 12 22:19:32.211: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:19:32.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-825" for this suite.

• [SLOW TEST:38.337 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":277,"completed":268,"skipped":4456,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:19:32.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 22:19:32.317: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 12 22:19:32.327: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 12 22:19:37.455: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 12 22:19:45.472: INFO: Creating deployment "test-rolling-update-deployment"
Feb 12 22:19:45.479: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 12 22:19:45.529: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 12 22:19:47.541: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Feb 12 22:19:47.546: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142785, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142785, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142785, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142785, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:19:49.553: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142785, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142785, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142785, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142785, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:19:51.552: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142785, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142785, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142785, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142785, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:19:53.552: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 12 22:19:53.564: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-8272 /apis/apps/v1/namespaces/deployment-8272/deployments/test-rolling-update-deployment 79f4eba9-b42e-4eef-9c50-4eaf3e900b4f 8036956 1 2020-02-12 22:19:45 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048283f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-12 22:19:45 +0000 UTC,LastTransitionTime:2020-02-12 22:19:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-02-12 22:19:52 +0000 UTC,LastTransitionTime:2020-02-12 22:19:45 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 12 22:19:53.569: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-8272 /apis/apps/v1/namespaces/deployment-8272/replicasets/test-rolling-update-deployment-67cf4f6444 e95fbd2f-3a55-47c3-bd29-c07ffbeefc94 8036942 1 2020-02-12 22:19:45 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 79f4eba9-b42e-4eef-9c50-4eaf3e900b4f 0xc004828897 0xc004828898}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004828908  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 12 22:19:53.569: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 12 22:19:53.569: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-8272 /apis/apps/v1/namespaces/deployment-8272/replicasets/test-rolling-update-controller 20deef92-1dc6-4935-bf1c-b84174b9c8bf 8036954 2 2020-02-12 22:19:32 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 79f4eba9-b42e-4eef-9c50-4eaf3e900b4f 0xc0048287c7 0xc0048287c8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004828828  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 12 22:19:53.573: INFO: Pod "test-rolling-update-deployment-67cf4f6444-9zvpc" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-9zvpc test-rolling-update-deployment-67cf4f6444- deployment-8272 /api/v1/namespaces/deployment-8272/pods/test-rolling-update-deployment-67cf4f6444-9zvpc 1808f9ad-c625-4d72-84fd-867ae7f4d0df 8036941 0 2020-02-12 22:19:45 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 e95fbd2f-3a55-47c3-bd29-c07ffbeefc94 0xc000e921f7 0xc000e921f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qwfwm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qwfwm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qwfwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:19:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:19:51 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:19:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:19:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-12 22:19:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-12 22:19:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://4c5f7f8f52de522ca550b0273c501355cb5d0cc221ad2a0bb5fc0378701dd496,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:19:53.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8272" for this suite.

• [SLOW TEST:21.381 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":277,"completed":269,"skipped":4462,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:19:53.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-3c2ef73b-5eeb-4fb4-b89b-8fdff2657289 in namespace container-probe-4571
Feb 12 22:20:01.727: INFO: Started pod busybox-3c2ef73b-5eeb-4fb4-b89b-8fdff2657289 in namespace container-probe-4571
STEP: checking the pod's current state and verifying that restartCount is present
Feb 12 22:20:01.730: INFO: Initial restart count of pod busybox-3c2ef73b-5eeb-4fb4-b89b-8fdff2657289 is 0
Feb 12 22:20:55.935: INFO: Restart count of pod container-probe-4571/busybox-3c2ef73b-5eeb-4fb4-b89b-8fdff2657289 is now 1 (54.204657056s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:20:55.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4571" for this suite.

• [SLOW TEST:62.390 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":277,"completed":270,"skipped":4466,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:20:55.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 12 22:20:57.368: INFO: Waiting up to 5m0s for pod "pod-a3aee708-7ef8-4db7-b8a3-2c9078f5934e" in namespace "emptydir-923" to be "Succeeded or Failed"
Feb 12 22:20:57.395: INFO: Pod "pod-a3aee708-7ef8-4db7-b8a3-2c9078f5934e": Phase="Pending", Reason="", readiness=false. Elapsed: 26.488329ms
Feb 12 22:20:59.413: INFO: Pod "pod-a3aee708-7ef8-4db7-b8a3-2c9078f5934e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045210227s
Feb 12 22:21:01.437: INFO: Pod "pod-a3aee708-7ef8-4db7-b8a3-2c9078f5934e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069161041s
Feb 12 22:21:03.443: INFO: Pod "pod-a3aee708-7ef8-4db7-b8a3-2c9078f5934e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075155992s
Feb 12 22:21:05.449: INFO: Pod "pod-a3aee708-7ef8-4db7-b8a3-2c9078f5934e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081046463s
STEP: Saw pod success
Feb 12 22:21:05.449: INFO: Pod "pod-a3aee708-7ef8-4db7-b8a3-2c9078f5934e" satisfied condition "Succeeded or Failed"
Feb 12 22:21:05.454: INFO: Trying to get logs from node jerma-node pod pod-a3aee708-7ef8-4db7-b8a3-2c9078f5934e container test-container: 
STEP: delete the pod
Feb 12 22:21:05.504: INFO: Waiting for pod pod-a3aee708-7ef8-4db7-b8a3-2c9078f5934e to disappear
Feb 12 22:21:05.510: INFO: Pod pod-a3aee708-7ef8-4db7-b8a3-2c9078f5934e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:21:05.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-923" for this suite.

• [SLOW TEST:9.539 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":271,"skipped":4472,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:21:05.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-55ca958d-2dae-4238-abb5-7c99d9c743f2
STEP: Creating a pod to test consume configMaps
Feb 12 22:21:05.794: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-975d87f9-c2b6-4521-99bd-177a6d9e2ac6" in namespace "projected-2376" to be "Succeeded or Failed"
Feb 12 22:21:05.799: INFO: Pod "pod-projected-configmaps-975d87f9-c2b6-4521-99bd-177a6d9e2ac6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.648511ms
Feb 12 22:21:07.816: INFO: Pod "pod-projected-configmaps-975d87f9-c2b6-4521-99bd-177a6d9e2ac6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022068801s
Feb 12 22:21:09.862: INFO: Pod "pod-projected-configmaps-975d87f9-c2b6-4521-99bd-177a6d9e2ac6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068145027s
Feb 12 22:21:11.937: INFO: Pod "pod-projected-configmaps-975d87f9-c2b6-4521-99bd-177a6d9e2ac6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143284488s
Feb 12 22:21:13.945: INFO: Pod "pod-projected-configmaps-975d87f9-c2b6-4521-99bd-177a6d9e2ac6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.151295779s
STEP: Saw pod success
Feb 12 22:21:13.945: INFO: Pod "pod-projected-configmaps-975d87f9-c2b6-4521-99bd-177a6d9e2ac6" satisfied condition "Succeeded or Failed"
Feb 12 22:21:13.949: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-975d87f9-c2b6-4521-99bd-177a6d9e2ac6 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 12 22:21:14.003: INFO: Waiting for pod pod-projected-configmaps-975d87f9-c2b6-4521-99bd-177a6d9e2ac6 to disappear
Feb 12 22:21:14.021: INFO: Pod pod-projected-configmaps-975d87f9-c2b6-4521-99bd-177a6d9e2ac6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:21:14.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2376" for this suite.

• [SLOW TEST:8.498 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":277,"completed":272,"skipped":4502,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:21:14.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Feb 12 22:21:14.148: INFO: Creating deployment "test-recreate-deployment"
Feb 12 22:21:14.154: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb 12 22:21:14.183: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb 12 22:21:16.194: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb 12 22:21:16.198: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142874, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142874, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142874, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142874, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:21:18.290: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142874, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142874, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142874, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142874, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:21:20.202: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142874, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142874, loc:(*time.Location)(0x7eef300)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142874, loc:(*time.Location)(0x7eef300)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717142874, loc:(*time.Location)(0x7eef300)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 12 22:21:22.204: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 12 22:21:22.249: INFO: Updating deployment test-recreate-deployment
Feb 12 22:21:22.249: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 12 22:21:23.092: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-3785 /apis/apps/v1/namespaces/deployment-3785/deployments/test-recreate-deployment b2b9f0aa-b782-4748-8297-85a5c7b8724d 8037313 2 2020-02-12 22:21:14 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0022264b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-12 22:21:22 +0000 UTC,LastTransitionTime:2020-02-12 22:21:22 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-02-12 22:21:22 +0000 UTC,LastTransitionTime:2020-02-12 22:21:14 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Feb 12 22:21:23.095: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-3785 /apis/apps/v1/namespaces/deployment-3785/replicasets/test-recreate-deployment-5f94c574ff 7c3a1992-b115-4c4c-9978-ec4691157661 8037311 1 2020-02-12 22:21:22 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment b2b9f0aa-b782-4748-8297-85a5c7b8724d 0xc002226a37 0xc002226a38}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002226a98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 12 22:21:23.096: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 12 22:21:23.096: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-3785 /apis/apps/v1/namespaces/deployment-3785/replicasets/test-recreate-deployment-799c574856 07d4451c-ebaf-4960-8b8a-e23e5ea74f40 8037301 2 2020-02-12 22:21:14 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment b2b9f0aa-b782-4748-8297-85a5c7b8724d 0xc002226b07 0xc002226b08}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002226b78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 12 22:21:23.099: INFO: Pod "test-recreate-deployment-5f94c574ff-qqwbs" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-qqwbs test-recreate-deployment-5f94c574ff- deployment-3785 /api/v1/namespaces/deployment-3785/pods/test-recreate-deployment-5f94c574ff-qqwbs cc83f422-27cc-4a1a-bf9d-12fb64a68d31 8037308 0 2020-02-12 22:21:22 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 7c3a1992-b115-4c4c-9978-ec4691157661 0xc0048286d7 0xc0048286d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vtxkf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vtxkf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vtxkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-12 22:21:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:21:23.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3785" for this suite.

• [SLOW TEST:9.075 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":277,"completed":273,"skipped":4535,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:21:23.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0212 22:21:39.147351       9 metrics_grabber.go:80] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 12 22:21:39.147: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:21:39.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4795" for this suite.

• [SLOW TEST:16.314 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":277,"completed":274,"skipped":4536,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Feb 12 22:21:39.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-1722
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 12 22:21:42.617: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 12 22:21:46.417: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 22:21:49.080: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 22:21:50.694: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 22:21:52.644: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 22:21:54.485: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 22:21:56.695: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 22:21:59.548: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 22:22:01.496: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 22:22:02.422: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 22:22:04.766: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 22:22:06.935: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 22:22:08.421: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 12 22:22:10.424: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 12 22:22:12.422: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 12 22:22:14.422: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 12 22:22:16.424: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 12 22:22:18.423: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 12 22:22:20.423: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 12 22:22:22.421: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 12 22:22:24.422: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 12 22:22:24.428: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 12 22:22:32.507: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1722 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 22:22:32.508: INFO: >>> kubeConfig: /root/.kube/config
I0212 22:22:32.568742       9 log.go:172] (0xc005f26000) (0xc001730640) Create stream
I0212 22:22:32.568817       9 log.go:172] (0xc005f26000) (0xc001730640) Stream added, broadcasting: 1
I0212 22:22:32.573121       9 log.go:172] (0xc005f26000) Reply frame received for 1
I0212 22:22:32.573154       9 log.go:172] (0xc005f26000) (0xc000a7c320) Create stream
I0212 22:22:32.573165       9 log.go:172] (0xc005f26000) (0xc000a7c320) Stream added, broadcasting: 3
I0212 22:22:32.577316       9 log.go:172] (0xc005f26000) Reply frame received for 3
I0212 22:22:32.577341       9 log.go:172] (0xc005f26000) (0xc000a7c6e0) Create stream
I0212 22:22:32.577351       9 log.go:172] (0xc005f26000) (0xc000a7c6e0) Stream added, broadcasting: 5
I0212 22:22:32.581022       9 log.go:172] (0xc005f26000) Reply frame received for 5
I0212 22:22:32.689467       9 log.go:172] (0xc005f26000) Data frame received for 3
I0212 22:22:32.689513       9 log.go:172] (0xc000a7c320) (3) Data frame handling
I0212 22:22:32.689527       9 log.go:172] (0xc000a7c320) (3) Data frame sent
I0212 22:22:32.756312       9 log.go:172] (0xc005f26000) (0xc000a7c320) Stream removed, broadcasting: 3
I0212 22:22:32.756503       9 log.go:172] (0xc005f26000) Data frame received for 1
I0212 22:22:32.756530       9 log.go:172] (0xc001730640) (1) Data frame handling
I0212 22:22:32.756553       9 log.go:172] (0xc001730640) (1) Data frame sent
I0212 22:22:32.756573       9 log.go:172] (0xc005f26000) (0xc000a7c6e0) Stream removed, broadcasting: 5
I0212 22:22:32.756684       9 log.go:172] (0xc005f26000) (0xc001730640) Stream removed, broadcasting: 1
I0212 22:22:32.756727       9 log.go:172] (0xc005f26000) Go away received
I0212 22:22:32.756879       9 log.go:172] (0xc005f26000) (0xc001730640) Stream removed, broadcasting: 1
I0212 22:22:32.756937       9 log.go:172] (0xc005f26000) (0xc000a7c320) Stream removed, broadcasting: 3
I0212 22:22:32.756962       9 log.go:172] (0xc005f26000) (0xc000a7c6e0) Stream removed, broadcasting: 5
Feb 12 22:22:32.756: INFO: Found all expected endpoints: [netserver-0]
Feb 12 22:22:32.760: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.6:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1722 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 12 22:22:32.760: INFO: >>> kubeConfig: /root/.kube/config
I0212 22:22:32.798691       9 log.go:172] (0xc002844630) (0xc001be05a0) Create stream
I0212 22:22:32.798768       9 log.go:172] (0xc002844630) (0xc001be05a0) Stream added, broadcasting: 1
I0212 22:22:32.804476       9 log.go:172] (0xc002844630) Reply frame received for 1
I0212 22:22:32.804550       9 log.go:172] (0xc002844630) (0xc00193a1e0) Create stream
I0212 22:22:32.804567       9 log.go:172] (0xc002844630) (0xc00193a1e0) Stream added, broadcasting: 3
I0212 22:22:32.806258       9 log.go:172] (0xc002844630) Reply frame received for 3
I0212 22:22:32.806298       9 log.go:172] (0xc002844630) (0xc0014acdc0) Create stream
I0212 22:22:32.806315       9 log.go:172] (0xc002844630) (0xc0014acdc0) Stream added, broadcasting: 5
I0212 22:22:32.808608       9 log.go:172] (0xc002844630) Reply frame received for 5
I0212 22:22:32.917946       9 log.go:172] (0xc002844630) Data frame received for 3
I0212 22:22:32.918024       9 log.go:172] (0xc00193a1e0) (3) Data frame handling
I0212 22:22:32.918052       9 log.go:172] (0xc00193a1e0) (3) Data frame sent
I0212 22:22:32.996629       9 log.go:172] (0xc002844630) (0xc00193a1e0) Stream removed, broadcasting: 3
I0212 22:22:32.996763       9 log.go:172] (0xc002844630) Data frame received for 1
I0212 22:22:32.996800       9 log.go:172] (0xc002844630) (0xc0014acdc0) Stream removed, broadcasting: 5
I0212 22:22:32.996832       9 log.go:172] (0xc001be05a0) (1) Data frame handling
I0212 22:22:32.997026       9 log.go:172] (0xc001be05a0) (1) Data frame sent
I0212 22:22:32.997047       9 log.go:172] (0xc002844630) (0xc001be05a0) Stream removed, broadcasting: 1
I0212 22:22:32.997076       9 log.go:172] (0xc002844630) Go away received
I0212 22:22:32.997293       9 log.go:172] (0xc002844630) (0xc001be05a0) Stream removed, broadcasting: 1
I0212 22:22:32.997325       9 log.go:172] (0xc002844630) (0xc00193a1e0) Stream removed, broadcasting: 3
I0212 22:22:32.997344       9 log.go:172] (0xc002844630) (0xc0014acdc0) Stream removed, broadcasting: 5
Feb 12 22:22:32.997: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Feb 12 22:22:32.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1722" for this suite.

• [SLOW TEST:53.588 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":277,"completed":275,"skipped":4552,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
Feb 12 22:22:33.014: INFO: Running AfterSuite actions on all nodes
Feb 12 22:22:33.014: INFO: Running AfterSuite actions on node 1
Feb 12 22:22:33.014: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":277,"completed":275,"skipped":4564,"failed":2,"failures":["[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}


Summarizing 2 Failures:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:762

[Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application  [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2027

Ran 277 of 4841 Specs in 6939.241 seconds
FAIL! -- 275 Passed | 2 Failed | 0 Pending | 4564 Skipped
--- FAIL: TestE2E (6939.37s)
FAIL