I0127 23:39:06.846648 9 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0127 23:39:06.847201 9 e2e.go:109] Starting e2e run "3bea6878-9807-4fe5-b87e-742f73226a44" on Ginkgo node 1
{"msg":"Test Suite starting","total":280,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1580168345 - Will randomize all specs
Will run 280 of 4845 specs

Jan 27 23:39:06.916: INFO: >>> kubeConfig: /root/.kube/config
Jan 27 23:39:06.919: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 27 23:39:06.948: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 27 23:39:06.997: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 27 23:39:06.997: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 27 23:39:06.997: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 27 23:39:07.007: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 27 23:39:07.007: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 27 23:39:07.007: INFO: e2e test version: v1.18.0-alpha.2.152+426b3538900329
Jan 27 23:39:07.009: INFO: kube-apiserver version: v1.17.0
Jan 27 23:39:07.009: INFO: >>> kubeConfig: /root/.kube/config
Jan 27 23:39:07.014: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 27 23:39:07.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
Jan 27 23:39:07.075: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2206
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating statefulset ss in namespace statefulset-2206
Jan 27 23:39:07.181: INFO: Found 0 stateful pods, waiting for 1
Jan 27 23:39:17.190: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jan 27 23:39:17.233: INFO: Deleting all statefulset in ns statefulset-2206
Jan 27 23:39:17.337: INFO: Scaling statefulset ss to 0
Jan 27 23:39:37.414: INFO: Waiting for statefulset status.replicas updated to 0
Jan 27 23:39:37.417: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 27 23:39:37.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2206" for this suite.
• [SLOW TEST:30.463 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":280,"completed":1,"skipped":26,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
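The spec above drives scaling through the StatefulSet scale subresource: it reads the /scale endpoint, updates it, and then checks that the StatefulSet's Spec.Replicas changed. A rough manual equivalent, reusing the names from this run (StatefulSet ss in namespace statefulset-2206); an illustrative sketch, not the suite's own code:

# Read the scale subresource (the same object the spec GETs)
kubectl get --raw /apis/apps/v1/namespaces/statefulset-2206/statefulsets/ss/scale
# Update it; kubectl scale goes through the same subresource
kubectl scale statefulset ss --namespace=statefulset-2206 --replicas=2
# Confirm Spec.Replicas was modified, as the spec asserts
kubectl get statefulset ss --namespace=statefulset-2206 -o jsonpath='{.spec.replicas}'
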
[sig-network] Services
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 27 23:39:37.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6351
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-6351
STEP:
creating replication controller externalsvc in namespace services-6351 I0127 23:39:37.753777 9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6351, replica count: 2 I0127 23:39:40.804555 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0127 23:39:43.805259 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0127 23:39:46.805977 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jan 27 23:39:46.855: INFO: Creating new exec pod Jan 27 23:39:52.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6351 execpodg2n7j -- /bin/sh -x -c nslookup clusterip-service' Jan 27 23:39:55.612: INFO: stderr: "I0127 23:39:55.423530 31 log.go:172] (0xc000878b00) (0xc000685ea0) Create stream\nI0127 23:39:55.424249 31 log.go:172] (0xc000878b00) (0xc000685ea0) Stream added, broadcasting: 1\nI0127 23:39:55.430788 31 log.go:172] (0xc000878b00) Reply frame received for 1\nI0127 23:39:55.430851 31 log.go:172] (0xc000878b00) (0xc000685f40) Create stream\nI0127 23:39:55.430879 31 log.go:172] (0xc000878b00) (0xc000685f40) Stream added, broadcasting: 3\nI0127 23:39:55.432831 31 log.go:172] (0xc000878b00) Reply frame received for 3\nI0127 23:39:55.432891 31 log.go:172] (0xc000878b00) (0xc0005c06e0) Create stream\nI0127 23:39:55.432903 31 log.go:172] (0xc000878b00) (0xc0005c06e0) Stream added, broadcasting: 5\nI0127 23:39:55.434915 31 log.go:172] (0xc000878b00) Reply frame received for 5\nI0127 23:39:55.513801 31 log.go:172] (0xc000878b00) Data frame received for 5\nI0127 23:39:55.514128 31 log.go:172] (0xc0005c06e0) (5) Data frame handling\nI0127 23:39:55.514235 31 log.go:172] (0xc0005c06e0) (5) Data frame sent\n+ nslookup clusterip-service\nI0127 23:39:55.532294 31 log.go:172] (0xc000878b00) Data frame received for 3\nI0127 23:39:55.532393 31 log.go:172] (0xc000685f40) (3) Data frame handling\nI0127 23:39:55.532435 31 log.go:172] (0xc000685f40) (3) Data frame sent\nI0127 23:39:55.532998 31 log.go:172] (0xc000878b00) Data frame received for 3\nI0127 23:39:55.533015 31 log.go:172] (0xc000685f40) (3) Data frame handling\nI0127 23:39:55.533027 31 log.go:172] (0xc000685f40) (3) Data frame sent\nI0127 23:39:55.600414 31 log.go:172] (0xc000878b00) Data frame received for 1\nI0127 23:39:55.600506 31 log.go:172] (0xc000878b00) (0xc000685f40) Stream removed, broadcasting: 3\nI0127 23:39:55.600572 31 log.go:172] (0xc000685ea0) (1) Data frame handling\nI0127 23:39:55.600605 31 log.go:172] (0xc000685ea0) (1) Data frame sent\nI0127 23:39:55.600743 31 log.go:172] (0xc000878b00) (0xc0005c06e0) Stream removed, broadcasting: 5\nI0127 23:39:55.600782 31 log.go:172] (0xc000878b00) (0xc000685ea0) Stream removed, broadcasting: 1\nI0127 23:39:55.600803 31 log.go:172] (0xc000878b00) Go away received\nI0127 23:39:55.601475 31 log.go:172] (0xc000878b00) (0xc000685ea0) Stream removed, broadcasting: 1\nI0127 23:39:55.601490 31 log.go:172] (0xc000878b00) (0xc000685f40) Stream removed, broadcasting: 3\nI0127 23:39:55.601501 31 log.go:172] (0xc000878b00) (0xc0005c06e0) Stream removed, broadcasting: 5\n" Jan 27 23:39:55.612: INFO: stdout: 
"Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6351.svc.cluster.local\tcanonical name = externalsvc.services-6351.svc.cluster.local.\nName:\texternalsvc.services-6351.svc.cluster.local\nAddress: 10.96.111.9\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6351, will wait for the garbage collector to delete the pods Jan 27 23:39:55.675: INFO: Deleting ReplicationController externalsvc took: 8.26574ms Jan 27 23:39:55.776: INFO: Terminating ReplicationController externalsvc pods took: 100.341919ms Jan 27 23:40:12.550: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:40:12.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6351" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:35.126 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":280,"completed":2,"skipped":55,"failed":0} S ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:40:12.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Jan 27 23:40:12.756: INFO: Waiting up to 5m0s for pod "downward-api-fe01f15e-bc6c-4d1e-9d1e-afe77ae6b033" in namespace "downward-api-9741" to be "success or failure" Jan 27 23:40:12.804: INFO: Pod "downward-api-fe01f15e-bc6c-4d1e-9d1e-afe77ae6b033": Phase="Pending", Reason="", readiness=false. Elapsed: 47.947572ms Jan 27 23:40:14.822: INFO: Pod "downward-api-fe01f15e-bc6c-4d1e-9d1e-afe77ae6b033": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065680486s Jan 27 23:40:16.832: INFO: Pod "downward-api-fe01f15e-bc6c-4d1e-9d1e-afe77ae6b033": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075311687s Jan 27 23:40:18.840: INFO: Pod "downward-api-fe01f15e-bc6c-4d1e-9d1e-afe77ae6b033": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084053445s Jan 27 23:40:20.846: INFO: Pod "downward-api-fe01f15e-bc6c-4d1e-9d1e-afe77ae6b033": Phase="Succeeded", Reason="", readiness=false. 
[sig-node] Downward API
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 27 23:40:12.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 27 23:40:12.756: INFO: Waiting up to 5m0s for pod "downward-api-fe01f15e-bc6c-4d1e-9d1e-afe77ae6b033" in namespace "downward-api-9741" to be "success or failure"
Jan 27 23:40:12.804: INFO: Pod "downward-api-fe01f15e-bc6c-4d1e-9d1e-afe77ae6b033": Phase="Pending", Reason="", readiness=false. Elapsed: 47.947572ms
Jan 27 23:40:14.822: INFO: Pod "downward-api-fe01f15e-bc6c-4d1e-9d1e-afe77ae6b033": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065680486s
Jan 27 23:40:16.832: INFO: Pod "downward-api-fe01f15e-bc6c-4d1e-9d1e-afe77ae6b033": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075311687s
Jan 27 23:40:18.840: INFO: Pod "downward-api-fe01f15e-bc6c-4d1e-9d1e-afe77ae6b033": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084053445s
Jan 27 23:40:20.846: INFO: Pod "downward-api-fe01f15e-bc6c-4d1e-9d1e-afe77ae6b033": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089774682s
STEP: Saw pod success
Jan 27 23:40:20.846: INFO: Pod "downward-api-fe01f15e-bc6c-4d1e-9d1e-afe77ae6b033" satisfied condition "success or failure"
Jan 27 23:40:20.854: INFO: Trying to get logs from node jerma-node pod downward-api-fe01f15e-bc6c-4d1e-9d1e-afe77ae6b033 container dapi-container:
STEP: delete the pod
Jan 27 23:40:21.038: INFO: Waiting for pod downward-api-fe01f15e-bc6c-4d1e-9d1e-afe77ae6b033 to disappear
Jan 27 23:40:21.054: INFO: Pod downward-api-fe01f15e-bc6c-4d1e-9d1e-afe77ae6b033 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 27 23:40:21.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9741" for this suite.
• [SLOW TEST:8.465 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":280,"completed":3,"skipped":56,"failed":0}
SS
------------------------------
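The Downward API spec above injects the pod's own name, namespace and IP into its environment and checks the container output before the pod is deleted. A minimal pod of the same shape; the name downward-api-demo and the busybox image are illustrative, the suite generates its own names, but the fieldPath values are the ones the feature defines:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.28
    command: ["sh", "-c", "env"]   # the env vars below appear in the output
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
kubectl logs downward-api-demo     # once Succeeded, shows the three values
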
[k8s.io] Probing container
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 27 23:40:21.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 27 23:40:21.341: INFO: The status of Pod test-webserver-57835b3e-67d2-4dde-8481-5ffecdc7b0ad is Pending, waiting for it to be Running (with Ready = true)
Jan 27 23:40:23.348: INFO: The status of Pod test-webserver-57835b3e-67d2-4dde-8481-5ffecdc7b0ad is Pending, waiting for it to be Running (with Ready = true)
Jan 27 23:40:25.347: INFO: The status of Pod test-webserver-57835b3e-67d2-4dde-8481-5ffecdc7b0ad is Pending, waiting for it to be Running (with Ready = true)
Jan 27 23:40:27.349: INFO: The status of Pod test-webserver-57835b3e-67d2-4dde-8481-5ffecdc7b0ad is Pending, waiting for it to be Running (with Ready = true)
Jan 27 23:40:29.346: INFO: The status of Pod test-webserver-57835b3e-67d2-4dde-8481-5ffecdc7b0ad is Running (Ready = false)
Jan 27 23:40:31.348: INFO: The status of Pod test-webserver-57835b3e-67d2-4dde-8481-5ffecdc7b0ad is Running (Ready = false)
Jan 27 23:40:33.349: INFO: The status of Pod test-webserver-57835b3e-67d2-4dde-8481-5ffecdc7b0ad is Running (Ready = false)
Jan 27 23:40:35.348: INFO: The status of Pod test-webserver-57835b3e-67d2-4dde-8481-5ffecdc7b0ad is Running (Ready = false)
Jan 27 23:40:37.347: INFO: The status of Pod test-webserver-57835b3e-67d2-4dde-8481-5ffecdc7b0ad is Running (Ready = false)
Jan 27 23:40:39.347: INFO: The status of Pod test-webserver-57835b3e-67d2-4dde-8481-5ffecdc7b0ad is Running (Ready = false)
Jan 27 23:40:41.350: INFO: The status of Pod test-webserver-57835b3e-67d2-4dde-8481-5ffecdc7b0ad is Running (Ready = false)
Jan 27 23:40:43.346: INFO: The status of Pod test-webserver-57835b3e-67d2-4dde-8481-5ffecdc7b0ad is Running (Ready = false)
Jan 27 23:40:45.347: INFO: The status of Pod test-webserver-57835b3e-67d2-4dde-8481-5ffecdc7b0ad is Running (Ready = true)
Jan 27 23:40:45.353: INFO: Container started at 2020-01-27 23:40:26 +0000 UTC, pod became ready at 2020-01-27 23:40:44 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 27 23:40:45.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2830" for this suite.
• [SLOW TEST:24.301 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":280,"completed":4,"skipped":58,"failed":0}
SSSSSS
------------------------------
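The probe spec above creates a webserver pod whose readiness probe has an initial delay, then asserts two things visible in the status lines: the pod reports Running (Ready = false) until the delay has elapsed (container started 23:40:26, ready 23:40:44), and the container never restarts. A pod of the same shape; the name, image and timings here are illustrative assumptions, not the suite's manifest:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: test-webserver
    image: nginx:1.17             # any image that serves HTTP on port 80
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15     # Ready must stay false at least this long
      periodSeconds: 5
EOF
# READY stays 0/1 during the delay, and RESTARTS stays 0 throughout
kubectl get pod readiness-demo --watch
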
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 27 23:40:45.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-4332
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating stateful set ss in namespace statefulset-4332
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4332
Jan 27 23:40:45.565: INFO: Found 0 stateful pods, waiting for 1
Jan 27 23:40:55.574: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 27 23:40:55.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4332 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 27 23:40:56.051: INFO: stderr: "I0127 23:40:55.795080 58 log.go:172] (0xc00038c6e0) (0xc0007455e0) Create stream\nI0127 23:40:55.795514 58 log.go:172] (0xc00038c6e0) (0xc0007455e0) Stream added, broadcasting: 1\nI0127 23:40:55.802583 58 log.go:172] (0xc00038c6e0) Reply frame received for 1\nI0127 23:40:55.802650 58 log.go:172] (0xc00038c6e0) (0xc00085c000) Create stream\nI0127 23:40:55.802665 58 log.go:172] (0xc00038c6e0) (0xc00085c000) Stream added, broadcasting: 3\nI0127 23:40:55.804515 58 log.go:172] (0xc00038c6e0) Reply frame received for 3\nI0127 23:40:55.804544 58 log.go:172] (0xc00038c6e0) (0xc000910000) Create stream\nI0127 23:40:55.804556 58 log.go:172] (0xc00038c6e0) (0xc000910000) Stream added, broadcasting: 5\nI0127 23:40:55.806330 58 log.go:172] (0xc00038c6e0) Reply frame received for 5\nI0127 23:40:55.908354 58 log.go:172] (0xc00038c6e0) Data frame received for 5\nI0127 23:40:55.908517 58 log.go:172] (0xc000910000) (5) Data frame handling\nI0127 23:40:55.908543 58 log.go:172] (0xc000910000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0127 23:40:55.936646 58 log.go:172] (0xc00038c6e0) Data frame received for 3\nI0127 23:40:55.936843 58 log.go:172] (0xc00085c000) (3) Data frame handling\nI0127 23:40:55.936938 58 log.go:172] (0xc00085c000) (3) Data frame sent\nI0127 23:40:56.033624 58 log.go:172] (0xc00038c6e0) Data frame received for 1\nI0127 23:40:56.033698 58 log.go:172] (0xc00038c6e0) (0xc00085c000) Stream removed, broadcasting: 3\nI0127 23:40:56.033783 58 log.go:172] (0xc0007455e0) (1) Data frame handling\nI0127 23:40:56.033799 58 log.go:172] (0xc0007455e0) (1) Data frame sent\nI0127 23:40:56.033808 58 log.go:172] (0xc00038c6e0) (0xc0007455e0) Stream removed, broadcasting: 1\nI0127 23:40:56.034782 58 log.go:172] (0xc00038c6e0) (0xc000910000) Stream removed, broadcasting: 5\nI0127 23:40:56.035049 58 log.go:172] (0xc00038c6e0) Go away received\nI0127 23:40:56.035093 58 log.go:172] (0xc00038c6e0) (0xc0007455e0) Stream removed, broadcasting: 1\nI0127 23:40:56.035138 58 log.go:172] (0xc00038c6e0) (0xc00085c000) Stream removed, broadcasting: 3\nI0127 23:40:56.035156 58 log.go:172] (0xc00038c6e0) (0xc000910000) Stream removed, broadcasting: 5\n"
Jan 27 23:40:56.051: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 27 23:40:56.051: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 27 23:40:56.056: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 27 23:41:06.065: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 27 23:41:06.065: INFO: Waiting for statefulset status.replicas updated to 0
Jan 27 23:41:06.093: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 27 23:41:06.093: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC }]
Jan 27 23:41:06.093: INFO:
Jan 27 23:41:06.093: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 27 23:41:07.990: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990350957s
Jan 27 23:41:09.107: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.093490935s
Jan 27 23:41:10.113: INFO:
Verifying statefulset ss doesn't scale past 3 for another 5.976482704s Jan 27 23:41:11.121: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.970427758s Jan 27 23:41:12.713: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.962839678s Jan 27 23:41:14.147: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.370932685s Jan 27 23:41:15.156: INFO: Verifying statefulset ss doesn't scale past 3 for another 936.86443ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4332 Jan 27 23:41:16.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4332 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 27 23:41:16.671: INFO: stderr: "I0127 23:41:16.434499 77 log.go:172] (0xc000a12160) (0xc0003b55e0) Create stream\nI0127 23:41:16.434662 77 log.go:172] (0xc000a12160) (0xc0003b55e0) Stream added, broadcasting: 1\nI0127 23:41:16.440505 77 log.go:172] (0xc000a12160) Reply frame received for 1\nI0127 23:41:16.440550 77 log.go:172] (0xc000a12160) (0xc0009e8000) Create stream\nI0127 23:41:16.440560 77 log.go:172] (0xc000a12160) (0xc0009e8000) Stream added, broadcasting: 3\nI0127 23:41:16.443240 77 log.go:172] (0xc000a12160) Reply frame received for 3\nI0127 23:41:16.443273 77 log.go:172] (0xc000a12160) (0xc0009e80a0) Create stream\nI0127 23:41:16.443281 77 log.go:172] (0xc000a12160) (0xc0009e80a0) Stream added, broadcasting: 5\nI0127 23:41:16.445359 77 log.go:172] (0xc000a12160) Reply frame received for 5\nI0127 23:41:16.571884 77 log.go:172] (0xc000a12160) Data frame received for 3\nI0127 23:41:16.572114 77 log.go:172] (0xc0009e8000) (3) Data frame handling\nI0127 23:41:16.572219 77 log.go:172] (0xc000a12160) Data frame received for 5\nI0127 23:41:16.572334 77 log.go:172] (0xc0009e80a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0127 23:41:16.572485 77 log.go:172] (0xc0009e8000) (3) Data frame sent\nI0127 23:41:16.572589 77 log.go:172] (0xc0009e80a0) (5) Data frame sent\nI0127 23:41:16.656884 77 log.go:172] (0xc000a12160) Data frame received for 1\nI0127 23:41:16.657129 77 log.go:172] (0xc0003b55e0) (1) Data frame handling\nI0127 23:41:16.657181 77 log.go:172] (0xc0003b55e0) (1) Data frame sent\nI0127 23:41:16.657666 77 log.go:172] (0xc000a12160) (0xc0003b55e0) Stream removed, broadcasting: 1\nI0127 23:41:16.658017 77 log.go:172] (0xc000a12160) (0xc0009e8000) Stream removed, broadcasting: 3\nI0127 23:41:16.658100 77 log.go:172] (0xc000a12160) (0xc0009e80a0) Stream removed, broadcasting: 5\nI0127 23:41:16.658151 77 log.go:172] (0xc000a12160) Go away received\nI0127 23:41:16.658794 77 log.go:172] (0xc000a12160) (0xc0003b55e0) Stream removed, broadcasting: 1\nI0127 23:41:16.658810 77 log.go:172] (0xc000a12160) (0xc0009e8000) Stream removed, broadcasting: 3\nI0127 23:41:16.658818 77 log.go:172] (0xc000a12160) (0xc0009e80a0) Stream removed, broadcasting: 5\n" Jan 27 23:41:16.671: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 27 23:41:16.671: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 27 23:41:16.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4332 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 27 23:41:17.103: INFO: stderr: "I0127 23:41:16.844259 99 
log.go:172] (0xc0009d26e0) (0xc000968000) Create stream\nI0127 23:41:16.844681 99 log.go:172] (0xc0009d26e0) (0xc000968000) Stream added, broadcasting: 1\nI0127 23:41:16.853271 99 log.go:172] (0xc0009d26e0) Reply frame received for 1\nI0127 23:41:16.853789 99 log.go:172] (0xc0009d26e0) (0xc0009bc000) Create stream\nI0127 23:41:16.854320 99 log.go:172] (0xc0009d26e0) (0xc0009bc000) Stream added, broadcasting: 3\nI0127 23:41:16.877359 99 log.go:172] (0xc0009d26e0) Reply frame received for 3\nI0127 23:41:16.877758 99 log.go:172] (0xc0009d26e0) (0xc00065db80) Create stream\nI0127 23:41:16.877814 99 log.go:172] (0xc0009d26e0) (0xc00065db80) Stream added, broadcasting: 5\nI0127 23:41:16.884705 99 log.go:172] (0xc0009d26e0) Reply frame received for 5\nI0127 23:41:16.987296 99 log.go:172] (0xc0009d26e0) Data frame received for 5\nI0127 23:41:16.987425 99 log.go:172] (0xc00065db80) (5) Data frame handling\nI0127 23:41:16.987466 99 log.go:172] (0xc00065db80) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0127 23:41:16.987541 99 log.go:172] (0xc0009d26e0) Data frame received for 5\nI0127 23:41:16.987575 99 log.go:172] (0xc00065db80) (5) Data frame handling\nI0127 23:41:16.987596 99 log.go:172] (0xc00065db80) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0127 23:41:16.989268 99 log.go:172] (0xc0009d26e0) Data frame received for 5\nI0127 23:41:16.989292 99 log.go:172] (0xc00065db80) (5) Data frame handling\nI0127 23:41:16.989330 99 log.go:172] (0xc00065db80) (5) Data frame sent\n+ true\nI0127 23:41:16.989594 99 log.go:172] (0xc0009d26e0) Data frame received for 3\nI0127 23:41:16.989619 99 log.go:172] (0xc0009bc000) (3) Data frame handling\nI0127 23:41:16.989635 99 log.go:172] (0xc0009bc000) (3) Data frame sent\nI0127 23:41:17.091742 99 log.go:172] (0xc0009d26e0) (0xc0009bc000) Stream removed, broadcasting: 3\nI0127 23:41:17.092213 99 log.go:172] (0xc0009d26e0) Data frame received for 1\nI0127 23:41:17.092260 99 log.go:172] (0xc000968000) (1) Data frame handling\nI0127 23:41:17.092282 99 log.go:172] (0xc000968000) (1) Data frame sent\nI0127 23:41:17.092299 99 log.go:172] (0xc0009d26e0) (0xc000968000) Stream removed, broadcasting: 1\nI0127 23:41:17.093523 99 log.go:172] (0xc0009d26e0) (0xc00065db80) Stream removed, broadcasting: 5\nI0127 23:41:17.093580 99 log.go:172] (0xc0009d26e0) (0xc000968000) Stream removed, broadcasting: 1\nI0127 23:41:17.093590 99 log.go:172] (0xc0009d26e0) (0xc0009bc000) Stream removed, broadcasting: 3\nI0127 23:41:17.093606 99 log.go:172] (0xc0009d26e0) (0xc00065db80) Stream removed, broadcasting: 5\n" Jan 27 23:41:17.103: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 27 23:41:17.103: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 27 23:41:17.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4332 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 27 23:41:17.463: INFO: stderr: "I0127 23:41:17.306278 120 log.go:172] (0xc00050f130) (0xc0005a5f40) Create stream\nI0127 23:41:17.306514 120 log.go:172] (0xc00050f130) (0xc0005a5f40) Stream added, broadcasting: 1\nI0127 23:41:17.311923 120 log.go:172] (0xc00050f130) Reply frame received for 1\nI0127 23:41:17.311978 120 log.go:172] (0xc00050f130) (0xc0004d2820) Create stream\nI0127 23:41:17.311994 120 log.go:172] (0xc00050f130) (0xc0004d2820) Stream 
added, broadcasting: 3\nI0127 23:41:17.312948 120 log.go:172] (0xc00050f130) Reply frame received for 3\nI0127 23:41:17.312969 120 log.go:172] (0xc00050f130) (0xc00072e000) Create stream\nI0127 23:41:17.312975 120 log.go:172] (0xc00050f130) (0xc00072e000) Stream added, broadcasting: 5\nI0127 23:41:17.314516 120 log.go:172] (0xc00050f130) Reply frame received for 5\nI0127 23:41:17.379079 120 log.go:172] (0xc00050f130) Data frame received for 5\nI0127 23:41:17.379142 120 log.go:172] (0xc00072e000) (5) Data frame handling\nI0127 23:41:17.379162 120 log.go:172] (0xc00072e000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0127 23:41:17.380785 120 log.go:172] (0xc00050f130) Data frame received for 5\nI0127 23:41:17.380816 120 log.go:172] (0xc00072e000) (5) Data frame handling\nI0127 23:41:17.380830 120 log.go:172] (0xc00072e000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0127 23:41:17.380847 120 log.go:172] (0xc00050f130) Data frame received for 3\nI0127 23:41:17.380872 120 log.go:172] (0xc0004d2820) (3) Data frame handling\nI0127 23:41:17.380899 120 log.go:172] (0xc0004d2820) (3) Data frame sent\nI0127 23:41:17.381130 120 log.go:172] (0xc00050f130) Data frame received for 5\nI0127 23:41:17.381142 120 log.go:172] (0xc00072e000) (5) Data frame handling\nI0127 23:41:17.381157 120 log.go:172] (0xc00072e000) (5) Data frame sent\nI0127 23:41:17.381165 120 log.go:172] (0xc00050f130) Data frame received for 5\nI0127 23:41:17.381174 120 log.go:172] (0xc00072e000) (5) Data frame handling\n+ true\nI0127 23:41:17.381197 120 log.go:172] (0xc00072e000) (5) Data frame sent\nI0127 23:41:17.451731 120 log.go:172] (0xc00050f130) (0xc0004d2820) Stream removed, broadcasting: 3\nI0127 23:41:17.451826 120 log.go:172] (0xc00050f130) Data frame received for 1\nI0127 23:41:17.451877 120 log.go:172] (0xc00050f130) (0xc00072e000) Stream removed, broadcasting: 5\nI0127 23:41:17.451925 120 log.go:172] (0xc0005a5f40) (1) Data frame handling\nI0127 23:41:17.451944 120 log.go:172] (0xc0005a5f40) (1) Data frame sent\nI0127 23:41:17.451953 120 log.go:172] (0xc00050f130) (0xc0005a5f40) Stream removed, broadcasting: 1\nI0127 23:41:17.451975 120 log.go:172] (0xc00050f130) Go away received\nI0127 23:41:17.452707 120 log.go:172] (0xc00050f130) (0xc0005a5f40) Stream removed, broadcasting: 1\nI0127 23:41:17.452721 120 log.go:172] (0xc00050f130) (0xc0004d2820) Stream removed, broadcasting: 3\nI0127 23:41:17.452725 120 log.go:172] (0xc00050f130) (0xc00072e000) Stream removed, broadcasting: 5\n" Jan 27 23:41:17.463: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 27 23:41:17.463: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 27 23:41:17.471: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 27 23:41:17.471: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 27 23:41:17.471: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 27 23:41:17.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4332 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 27 23:41:17.868: INFO: stderr: "I0127 23:41:17.684379 143 log.go:172] (0xc000a66f20) (0xc000a6c3c0) Create stream\nI0127 
23:41:17.684961 143 log.go:172] (0xc000a66f20) (0xc000a6c3c0) Stream added, broadcasting: 1\nI0127 23:41:17.693151 143 log.go:172] (0xc000a66f20) Reply frame received for 1\nI0127 23:41:17.693285 143 log.go:172] (0xc000a66f20) (0xc0005ce6e0) Create stream\nI0127 23:41:17.693318 143 log.go:172] (0xc000a66f20) (0xc0005ce6e0) Stream added, broadcasting: 3\nI0127 23:41:17.694864 143 log.go:172] (0xc000a66f20) Reply frame received for 3\nI0127 23:41:17.694910 143 log.go:172] (0xc000a66f20) (0xc0008d6000) Create stream\nI0127 23:41:17.694924 143 log.go:172] (0xc000a66f20) (0xc0008d6000) Stream added, broadcasting: 5\nI0127 23:41:17.696825 143 log.go:172] (0xc000a66f20) Reply frame received for 5\nI0127 23:41:17.764708 143 log.go:172] (0xc000a66f20) Data frame received for 3\nI0127 23:41:17.764835 143 log.go:172] (0xc0005ce6e0) (3) Data frame handling\nI0127 23:41:17.764863 143 log.go:172] (0xc0005ce6e0) (3) Data frame sent\nI0127 23:41:17.764920 143 log.go:172] (0xc000a66f20) Data frame received for 5\nI0127 23:41:17.764939 143 log.go:172] (0xc0008d6000) (5) Data frame handling\nI0127 23:41:17.764949 143 log.go:172] (0xc0008d6000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0127 23:41:17.850135 143 log.go:172] (0xc000a66f20) Data frame received for 1\nI0127 23:41:17.850388 143 log.go:172] (0xc000a6c3c0) (1) Data frame handling\nI0127 23:41:17.850528 143 log.go:172] (0xc000a6c3c0) (1) Data frame sent\nI0127 23:41:17.850618 143 log.go:172] (0xc000a66f20) (0xc000a6c3c0) Stream removed, broadcasting: 1\nI0127 23:41:17.851100 143 log.go:172] (0xc000a66f20) (0xc0005ce6e0) Stream removed, broadcasting: 3\nI0127 23:41:17.851515 143 log.go:172] (0xc000a66f20) (0xc0008d6000) Stream removed, broadcasting: 5\nI0127 23:41:17.852086 143 log.go:172] (0xc000a66f20) Go away received\nI0127 23:41:17.852552 143 log.go:172] (0xc000a66f20) (0xc000a6c3c0) Stream removed, broadcasting: 1\nI0127 23:41:17.852606 143 log.go:172] (0xc000a66f20) (0xc0005ce6e0) Stream removed, broadcasting: 3\nI0127 23:41:17.852629 143 log.go:172] (0xc000a66f20) (0xc0008d6000) Stream removed, broadcasting: 5\n" Jan 27 23:41:17.868: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 27 23:41:17.868: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 27 23:41:17.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4332 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 27 23:41:18.402: INFO: stderr: "I0127 23:41:18.164036 164 log.go:172] (0xc00094c000) (0xc000a46140) Create stream\nI0127 23:41:18.164300 164 log.go:172] (0xc00094c000) (0xc000a46140) Stream added, broadcasting: 1\nI0127 23:41:18.168022 164 log.go:172] (0xc00094c000) Reply frame received for 1\nI0127 23:41:18.168142 164 log.go:172] (0xc00094c000) (0xc000a461e0) Create stream\nI0127 23:41:18.168153 164 log.go:172] (0xc00094c000) (0xc000a461e0) Stream added, broadcasting: 3\nI0127 23:41:18.169288 164 log.go:172] (0xc00094c000) Reply frame received for 3\nI0127 23:41:18.169311 164 log.go:172] (0xc00094c000) (0xc000934140) Create stream\nI0127 23:41:18.169317 164 log.go:172] (0xc00094c000) (0xc000934140) Stream added, broadcasting: 5\nI0127 23:41:18.171045 164 log.go:172] (0xc00094c000) Reply frame received for 5\nI0127 23:41:18.240260 164 log.go:172] (0xc00094c000) Data frame received for 5\nI0127 23:41:18.240326 164 log.go:172] (0xc000934140) 
(5) Data frame handling\nI0127 23:41:18.240348 164 log.go:172] (0xc000934140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0127 23:41:18.285143 164 log.go:172] (0xc00094c000) Data frame received for 3\nI0127 23:41:18.285229 164 log.go:172] (0xc000a461e0) (3) Data frame handling\nI0127 23:41:18.285463 164 log.go:172] (0xc000a461e0) (3) Data frame sent\nI0127 23:41:18.385621 164 log.go:172] (0xc00094c000) Data frame received for 1\nI0127 23:41:18.385694 164 log.go:172] (0xc00094c000) (0xc000934140) Stream removed, broadcasting: 5\nI0127 23:41:18.385757 164 log.go:172] (0xc000a46140) (1) Data frame handling\nI0127 23:41:18.385772 164 log.go:172] (0xc000a46140) (1) Data frame sent\nI0127 23:41:18.385802 164 log.go:172] (0xc00094c000) (0xc000a461e0) Stream removed, broadcasting: 3\nI0127 23:41:18.385838 164 log.go:172] (0xc00094c000) (0xc000a46140) Stream removed, broadcasting: 1\nI0127 23:41:18.385936 164 log.go:172] (0xc00094c000) Go away received\nI0127 23:41:18.392652 164 log.go:172] (0xc00094c000) (0xc000a46140) Stream removed, broadcasting: 1\nI0127 23:41:18.393070 164 log.go:172] (0xc00094c000) (0xc000a461e0) Stream removed, broadcasting: 3\nI0127 23:41:18.393102 164 log.go:172] (0xc00094c000) (0xc000934140) Stream removed, broadcasting: 5\n" Jan 27 23:41:18.402: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 27 23:41:18.402: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 27 23:41:18.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4332 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 27 23:41:18.831: INFO: stderr: "I0127 23:41:18.611429 184 log.go:172] (0xc000bb6370) (0xc000a54280) Create stream\nI0127 23:41:18.611883 184 log.go:172] (0xc000bb6370) (0xc000a54280) Stream added, broadcasting: 1\nI0127 23:41:18.623584 184 log.go:172] (0xc000bb6370) Reply frame received for 1\nI0127 23:41:18.623737 184 log.go:172] (0xc000bb6370) (0xc000a94140) Create stream\nI0127 23:41:18.623753 184 log.go:172] (0xc000bb6370) (0xc000a94140) Stream added, broadcasting: 3\nI0127 23:41:18.625979 184 log.go:172] (0xc000bb6370) Reply frame received for 3\nI0127 23:41:18.626071 184 log.go:172] (0xc000bb6370) (0xc000a54320) Create stream\nI0127 23:41:18.626104 184 log.go:172] (0xc000bb6370) (0xc000a54320) Stream added, broadcasting: 5\nI0127 23:41:18.627120 184 log.go:172] (0xc000bb6370) Reply frame received for 5\nI0127 23:41:18.701873 184 log.go:172] (0xc000bb6370) Data frame received for 5\nI0127 23:41:18.701957 184 log.go:172] (0xc000a54320) (5) Data frame handling\nI0127 23:41:18.701978 184 log.go:172] (0xc000a54320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0127 23:41:18.740621 184 log.go:172] (0xc000bb6370) Data frame received for 3\nI0127 23:41:18.740683 184 log.go:172] (0xc000a94140) (3) Data frame handling\nI0127 23:41:18.740700 184 log.go:172] (0xc000a94140) (3) Data frame sent\nI0127 23:41:18.810864 184 log.go:172] (0xc000bb6370) Data frame received for 1\nI0127 23:41:18.811496 184 log.go:172] (0xc000bb6370) (0xc000a54320) Stream removed, broadcasting: 5\nI0127 23:41:18.811633 184 log.go:172] (0xc000a54280) (1) Data frame handling\nI0127 23:41:18.811717 184 log.go:172] (0xc000a54280) (1) Data frame sent\nI0127 23:41:18.811827 184 log.go:172] (0xc000bb6370) (0xc000a94140) Stream removed, broadcasting: 3\nI0127 
23:41:18.811880 184 log.go:172] (0xc000bb6370) (0xc000a54280) Stream removed, broadcasting: 1\nI0127 23:41:18.811943 184 log.go:172] (0xc000bb6370) Go away received\nI0127 23:41:18.815380 184 log.go:172] (0xc000bb6370) (0xc000a54280) Stream removed, broadcasting: 1\nI0127 23:41:18.815545 184 log.go:172] (0xc000bb6370) (0xc000a94140) Stream removed, broadcasting: 3\nI0127 23:41:18.815613 184 log.go:172] (0xc000bb6370) (0xc000a54320) Stream removed, broadcasting: 5\n" Jan 27 23:41:18.831: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 27 23:41:18.831: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 27 23:41:18.831: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 23:41:18.837: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 27 23:41:28.861: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 27 23:41:28.861: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 27 23:41:28.861: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 27 23:41:28.891: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 23:41:28.891: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC }] Jan 27 23:41:28.891: INFO: ss-1 jerma-server-mvvl6gufaqub Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:28.891: INFO: ss-2 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:28.891: INFO: Jan 27 23:41:28.891: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 23:41:30.608: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 23:41:30.608: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC }] Jan 27 23:41:30.608: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:30.608: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:30.608: INFO: Jan 27 23:41:30.608: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 23:41:31.614: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 23:41:31.614: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC }] Jan 27 23:41:31.614: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:31.614: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:31.614: INFO: Jan 27 23:41:31.614: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 23:41:32.622: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 23:41:32.622: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC }] Jan 27 23:41:32.622: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:32.622: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:32.622: INFO: Jan 27 23:41:32.622: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 23:41:33.631: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 23:41:33.631: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC }] Jan 27 23:41:33.631: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:33.631: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:33.631: INFO: Jan 27 23:41:33.631: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 23:41:34.637: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 23:41:34.637: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC }] Jan 27 23:41:34.637: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:34.637: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:34.637: INFO: Jan 27 23:41:34.637: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 23:41:35.644: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 23:41:35.644: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC }] Jan 27 23:41:35.644: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:35.644: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:35.644: INFO: Jan 27 23:41:35.644: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 23:41:36.653: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 23:41:36.654: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC }] Jan 27 23:41:36.654: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:36.654: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC 
} {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:36.654: INFO: Jan 27 23:41:36.654: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 23:41:37.663: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 23:41:37.663: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC }] Jan 27 23:41:37.663: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:37.663: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:37.663: INFO: Jan 27 23:41:37.663: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 27 23:41:38.670: INFO: POD NODE PHASE GRACE CONDITIONS Jan 27 23:41:38.670: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:40:45 +0000 UTC }] Jan 27 23:41:38.670: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:38.670: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-01-27 23:41:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-27 23:41:06 +0000 UTC }] Jan 27 23:41:38.670: INFO: Jan 27 23:41:38.670: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-4332 Jan 27 23:41:39.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4332 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 27 23:41:39.932: INFO: rc: 1 Jan 27 23:41:39.932: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4332 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 27 23:41:49.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4332 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 27 23:41:50.099: INFO: rc: 1 Jan 27 23:41:50.099: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4332 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 [identical RunHostCmd retries trimmed: the same command was retried every 10s from Jan 27 23:42:00 through Jan 27 23:46:25, each attempt returning rc: 1 with stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1] Jan 27 23:46:35.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4332 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 27 23:46:35.408: INFO: rc: 1 Jan 27 23:46:35.409: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl
--kubeconfig=/root/.kube/config exec --namespace=statefulset-4332 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 27 23:46:45.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4332 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 27 23:46:45.626: INFO: rc: 1 Jan 27 23:46:45.626: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Jan 27 23:46:45.626: INFO: Scaling statefulset ss to 0 Jan 27 23:46:45.653: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Jan 27 23:46:45.656: INFO: Deleting all statefulset in ns statefulset-4332 Jan 27 23:46:45.661: INFO: Scaling statefulset ss to 0 Jan 27 23:46:45.673: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 23:46:45.677: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:46:45.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4332" for this suite. • [SLOW TEST:360.338 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":280,"completed":5,"skipped":64,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:46:45.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 27 23:46:46.735: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 27 23:46:48.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765606, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765606, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765606, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765606, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 23:46:50.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765606, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765606, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765606, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765606, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 23:46:52.758: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765606, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765606, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765606, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765606, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 27 23:46:55.793: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the 
/apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:46:55.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5755" for this suite. STEP: Destroying namespace "webhook-5755-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.307 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":280,"completed":6,"skipped":74,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:46:56.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test substitution in container's args Jan 27 23:46:56.168: INFO: Waiting up to 5m0s for pod "var-expansion-2f99a131-9e00-4e6b-8177-b8d11057e04d" in namespace "var-expansion-7477" to be "success or failure" Jan 27 23:46:56.171: INFO: Pod "var-expansion-2f99a131-9e00-4e6b-8177-b8d11057e04d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.502576ms Jan 27 23:46:58.176: INFO: Pod "var-expansion-2f99a131-9e00-4e6b-8177-b8d11057e04d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007469171s Jan 27 23:47:00.184: INFO: Pod "var-expansion-2f99a131-9e00-4e6b-8177-b8d11057e04d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015408999s Jan 27 23:47:02.195: INFO: Pod "var-expansion-2f99a131-9e00-4e6b-8177-b8d11057e04d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026177267s Jan 27 23:47:04.211: INFO: Pod "var-expansion-2f99a131-9e00-4e6b-8177-b8d11057e04d": Phase="Succeeded", Reason="", readiness=false. 
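For reference, the pod this args-substitution test creates can be sketched in Go against the Kubernetes API types (a minimal illustration; the pod name, image, and message below are assumptions, not values from this run). The kubelet rewrites $(VAR) references in a container's command and args from env vars declared on that same container, which is what the assertion above exercises; the stored spec keeps the literal $(VAR) text, and expansion happens only at container start.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.31",
				Command: []string{"sh", "-c"},
				// The kubelet substitutes $(MESSAGE) from the env list below
				// before starting the container; an unknown $(VAR) reference
				// would be left verbatim.
				Args: []string{"echo $(MESSAGE)"},
				Env:  []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from args expansion"}},
			}},
		},
	}
	fmt.Printf("stored args stay literal until container start: %v\n", pod.Spec.Containers[0].Args)
}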
Elapsed: 8.043054632s STEP: Saw pod success Jan 27 23:47:04.212: INFO: Pod "var-expansion-2f99a131-9e00-4e6b-8177-b8d11057e04d" satisfied condition "success or failure" Jan 27 23:47:04.214: INFO: Trying to get logs from node jerma-node pod var-expansion-2f99a131-9e00-4e6b-8177-b8d11057e04d container dapi-container: STEP: delete the pod Jan 27 23:47:04.294: INFO: Waiting for pod var-expansion-2f99a131-9e00-4e6b-8177-b8d11057e04d to disappear Jan 27 23:47:04.302: INFO: Pod var-expansion-2f99a131-9e00-4e6b-8177-b8d11057e04d no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:47:04.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7477" for this suite. • [SLOW TEST:8.320 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":280,"completed":7,"skipped":87,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:47:04.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test env composition Jan 27 23:47:04.413: INFO: Waiting up to 5m0s for pod "var-expansion-3e1e6275-e400-4fa8-beea-f78f742a7560" in namespace "var-expansion-3251" to be "success or failure" Jan 27 23:47:04.417: INFO: Pod "var-expansion-3e1e6275-e400-4fa8-beea-f78f742a7560": Phase="Pending", Reason="", readiness=false. Elapsed: 3.301053ms Jan 27 23:47:06.426: INFO: Pod "var-expansion-3e1e6275-e400-4fa8-beea-f78f742a7560": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012825915s Jan 27 23:47:08.433: INFO: Pod "var-expansion-3e1e6275-e400-4fa8-beea-f78f742a7560": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019504172s Jan 27 23:47:10.440: INFO: Pod "var-expansion-3e1e6275-e400-4fa8-beea-f78f742a7560": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026293746s Jan 27 23:47:12.447: INFO: Pod "var-expansion-3e1e6275-e400-4fa8-beea-f78f742a7560": Phase="Succeeded", Reason="", readiness=false. 
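The env-composition pod being polled above relies on in-list expansion: an env var's value may reference vars declared earlier in the same env list. A minimal sketch (names and values are illustrative, not taken from this run):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "env-composition-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.31",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FIRST", Value: "seed"},
					// $(FIRST) expands because FIRST appears earlier in this
					// list; a reference to a var declared later (or not at
					// all) is left as the literal string "$(...)".
					{Name: "COMPOSED", Value: "prefix-$(FIRST)-suffix"},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Env[1].Value)
}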
Elapsed: 8.033613269s STEP: Saw pod success Jan 27 23:47:12.447: INFO: Pod "var-expansion-3e1e6275-e400-4fa8-beea-f78f742a7560" satisfied condition "success or failure" Jan 27 23:47:12.451: INFO: Trying to get logs from node jerma-node pod var-expansion-3e1e6275-e400-4fa8-beea-f78f742a7560 container dapi-container: STEP: delete the pod Jan 27 23:47:12.784: INFO: Waiting for pod var-expansion-3e1e6275-e400-4fa8-beea-f78f742a7560 to disappear Jan 27 23:47:12.803: INFO: Pod var-expansion-3e1e6275-e400-4fa8-beea-f78f742a7560 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:47:12.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3251" for this suite. • [SLOW TEST:8.473 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":280,"completed":8,"skipped":104,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:47:12.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-map-80dcd71f-d980-4d00-a832-497a90161786 STEP: Creating a pod to test consume secrets Jan 27 23:47:12.972: INFO: Waiting up to 5m0s for pod "pod-secrets-b5d40dda-1cb5-475d-aef6-efc8be265267" in namespace "secrets-4488" to be "success or failure" Jan 27 23:47:12.979: INFO: Pod "pod-secrets-b5d40dda-1cb5-475d-aef6-efc8be265267": Phase="Pending", Reason="", readiness=false. Elapsed: 6.443307ms Jan 27 23:47:15.014: INFO: Pod "pod-secrets-b5d40dda-1cb5-475d-aef6-efc8be265267": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041780378s Jan 27 23:47:17.021: INFO: Pod "pod-secrets-b5d40dda-1cb5-475d-aef6-efc8be265267": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048015459s Jan 27 23:47:19.065: INFO: Pod "pod-secrets-b5d40dda-1cb5-475d-aef6-efc8be265267": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092250864s Jan 27 23:47:21.072: INFO: Pod "pod-secrets-b5d40dda-1cb5-475d-aef6-efc8be265267": Phase="Succeeded", Reason="", readiness=false. 
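The "volume with mappings" variant exercised here projects only selected secret keys, at caller-chosen relative paths, via the Items field of the secret volume source. A minimal sketch (the secret name and key are illustrative; this run's generated names are not reproduced):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-mapping-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map-demo",
						// Without Items, every key is projected under its own
						// name; with Items, only the listed keys appear, at
						// the relative paths given here.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox:1.31",
				Command: []string{"sh", "-c", "cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}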
Elapsed: 8.099365474s STEP: Saw pod success Jan 27 23:47:21.072: INFO: Pod "pod-secrets-b5d40dda-1cb5-475d-aef6-efc8be265267" satisfied condition "success or failure" Jan 27 23:47:21.076: INFO: Trying to get logs from node jerma-node pod pod-secrets-b5d40dda-1cb5-475d-aef6-efc8be265267 container secret-volume-test: STEP: delete the pod Jan 27 23:47:21.239: INFO: Waiting for pod pod-secrets-b5d40dda-1cb5-475d-aef6-efc8be265267 to disappear Jan 27 23:47:21.246: INFO: Pod pod-secrets-b5d40dda-1cb5-475d-aef6-efc8be265267 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:47:21.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4488" for this suite. • [SLOW TEST:8.505 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":9,"skipped":142,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:47:21.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 27 23:47:21.431: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f471d2b1-f4fe-42ac-a1d0-79115606c226" in namespace "projected-4082" to be "success or failure" Jan 27 23:47:21.439: INFO: Pod "downwardapi-volume-f471d2b1-f4fe-42ac-a1d0-79115606c226": Phase="Pending", Reason="", readiness=false. Elapsed: 8.59302ms Jan 27 23:47:23.517: INFO: Pod "downwardapi-volume-f471d2b1-f4fe-42ac-a1d0-79115606c226": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086268317s Jan 27 23:47:25.524: INFO: Pod "downwardapi-volume-f471d2b1-f4fe-42ac-a1d0-79115606c226": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09331482s Jan 27 23:47:27.530: INFO: Pod "downwardapi-volume-f471d2b1-f4fe-42ac-a1d0-79115606c226": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098686128s Jan 27 23:47:29.555: INFO: Pod "downwardapi-volume-f471d2b1-f4fe-42ac-a1d0-79115606c226": Phase="Succeeded", Reason="", readiness=false. 
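What "provide podname only" means concretely: a projected volume with a single downwardAPI item that maps metadata.name to a file, so the container can read its own pod name from disk. A minimal sketch (volume and mount names are assumptions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								// Expose exactly one field: the pod's own name.
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.31",
				Command: []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
					ReadOnly:  true,
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}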
Elapsed: 8.124515597s STEP: Saw pod success Jan 27 23:47:29.555: INFO: Pod "downwardapi-volume-f471d2b1-f4fe-42ac-a1d0-79115606c226" satisfied condition "success or failure" Jan 27 23:47:29.564: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-f471d2b1-f4fe-42ac-a1d0-79115606c226 container client-container: STEP: delete the pod Jan 27 23:47:29.605: INFO: Waiting for pod downwardapi-volume-f471d2b1-f4fe-42ac-a1d0-79115606c226 to disappear Jan 27 23:47:29.631: INFO: Pod downwardapi-volume-f471d2b1-f4fe-42ac-a1d0-79115606c226 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:47:29.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4082" for this suite. • [SLOW TEST:8.322 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":280,"completed":10,"skipped":163,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:47:29.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-5cb58729-654c-49a7-b133-e94cc7880af8 STEP: Creating a pod to test consume configMaps Jan 27 23:47:29.731: INFO: Waiting up to 5m0s for pod "pod-configmaps-023c5b7f-461c-4709-abb8-29872a17db5b" in namespace "configmap-6237" to be "success or failure" Jan 27 23:47:29.781: INFO: Pod "pod-configmaps-023c5b7f-461c-4709-abb8-29872a17db5b": Phase="Pending", Reason="", readiness=false. Elapsed: 49.456712ms Jan 27 23:47:31.791: INFO: Pod "pod-configmaps-023c5b7f-461c-4709-abb8-29872a17db5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059555846s Jan 27 23:47:33.799: INFO: Pod "pod-configmaps-023c5b7f-461c-4709-abb8-29872a17db5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06769484s Jan 27 23:47:35.808: INFO: Pod "pod-configmaps-023c5b7f-461c-4709-abb8-29872a17db5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076607629s Jan 27 23:47:37.826: INFO: Pod "pod-configmaps-023c5b7f-461c-4709-abb8-29872a17db5b": Phase="Succeeded", Reason="", readiness=false. 
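The non-root variant above combines an ordinary configMap volume with a pod-level securityContext, so the projected file must be readable by an unprivileged UID (configMap files default to mode 0644, so this works). A minimal sketch (UID, configMap name, and key are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-nonroot-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Run every container in the pod as an unprivileged user; the
			// kubelet refuses to start the pod if the image would run as
			// root while RunAsNonRoot is set.
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser:    int64Ptr(1000),
				RunAsNonRoot: boolPtr(true),
			},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-demo"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox:1.31",
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}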
Elapsed: 8.094651899s STEP: Saw pod success Jan 27 23:47:37.826: INFO: Pod "pod-configmaps-023c5b7f-461c-4709-abb8-29872a17db5b" satisfied condition "success or failure" Jan 27 23:47:37.830: INFO: Trying to get logs from node jerma-node pod pod-configmaps-023c5b7f-461c-4709-abb8-29872a17db5b container configmap-volume-test: STEP: delete the pod Jan 27 23:47:37.870: INFO: Waiting for pod pod-configmaps-023c5b7f-461c-4709-abb8-29872a17db5b to disappear Jan 27 23:47:37.873: INFO: Pod pod-configmaps-023c5b7f-461c-4709-abb8-29872a17db5b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:47:37.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6237" for this suite. • [SLOW TEST:8.238 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":11,"skipped":212,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:47:37.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:47:37.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-5118" for this suite. 
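The Table-transformation test that just tore down works by content negotiation: the client asks the API server for a server-side Table rendering via the Accept header, and a backend that cannot produce the requested Table group/version must answer 406 Not Acceptable, which is the behaviour the test asserts. A minimal sketch of the happy path (assuming a client-go recent enough that Request.Do takes a context; on the v0.17-era client matching this run, Do() takes no arguments):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the server to render the pod list as a meta.k8s.io/v1 Table.
	// A backend that cannot serve the requested group/version of Table
	// replies 406 Not Acceptable instead.
	var table metav1.Table
	err = cs.CoreV1().RESTClient().Get().
		Namespace("kube-system").
		Resource("pods").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
		Do(context.TODO()).
		Into(&table)
	if err != nil {
		panic(err)
	}
	for _, col := range table.ColumnDefinitions {
		fmt.Println(col.Name)
	}
}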
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":280,"completed":12,"skipped":227,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:47:38.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name secret-emptykey-test-951f512a-c656-4ce1-a974-d7bfa4325f1c [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:47:38.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2074" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":280,"completed":13,"skipped":232,"failed":0} SSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:47:38.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service endpoint-test2 in namespace services-3756 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3756 to expose endpoints map[] Jan 27 23:47:38.399: INFO: Get endpoints failed (10.718891ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jan 27 23:47:39.404: INFO: successfully validated that service endpoint-test2 in namespace services-3756 exposes endpoints map[] (1.016616353s elapsed) STEP: Creating pod pod1 in namespace services-3756 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3756 to expose endpoints map[pod1:[80]] Jan 27 23:47:43.587: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.171005085s elapsed, will retry) Jan 27 23:47:45.820: INFO: successfully validated that service endpoint-test2 in namespace services-3756 exposes endpoints map[pod1:[80]] (6.404333129s elapsed) STEP: Creating pod pod2 in namespace services-3756 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3756 to expose endpoints map[pod1:[80] pod2:[80]] Jan 27 23:47:50.393: INFO: Unexpected endpoints: found 
map[fdebb793-e8a0-4fb0-8ce0-364b9ec48f9a:[80]], expected map[pod1:[80] pod2:[80]] (4.552793297s elapsed, will retry) Jan 27 23:47:53.447: INFO: successfully validated that service endpoint-test2 in namespace services-3756 exposes endpoints map[pod1:[80] pod2:[80]] (7.607263957s elapsed) STEP: Deleting pod pod1 in namespace services-3756 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3756 to expose endpoints map[pod2:[80]] Jan 27 23:47:53.487: INFO: successfully validated that service endpoint-test2 in namespace services-3756 exposes endpoints map[pod2:[80]] (23.022484ms elapsed) STEP: Deleting pod pod2 in namespace services-3756 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3756 to expose endpoints map[] Jan 27 23:47:54.582: INFO: successfully validated that service endpoint-test2 in namespace services-3756 exposes endpoints map[] (1.081665538s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:47:54.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3756" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:16.571 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":280,"completed":14,"skipped":235,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:47:54.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-142.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-142.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-142.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-142.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-142.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-142.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 27 23:48:10.892: INFO: DNS probes using dns-142/dns-test-116290e7-6a47-47f1-b48a-707a7e4bfe43 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:48:10.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-142" for this suite. • [SLOW TEST:16.207 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":280,"completed":15,"skipped":240,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:48:10.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 27 23:48:11.102: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 27 23:48:11.128: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 27 23:48:16.145: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 27 23:48:20.157: INFO: Creating deployment "test-rolling-update-deployment" Jan 27 23:48:20.163: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 27 
23:48:20.201: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 27 23:48:22.211: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 27 23:48:22.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765700, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765700, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765700, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765700, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 23:48:24.222: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765700, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765700, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765700, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765700, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 23:48:26.221: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765700, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765700, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765700, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715765700, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 23:48:28.220: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Jan 27 23:48:28.233: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7594 
/apis/apps/v1/namespaces/deployment-7594/deployments/test-rolling-update-deployment 007cface-0ab2-483c-9de7-f356ede54e75 4767122 1 2020-01-27 23:48:20 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00281d898 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-27 23:48:20 +0000 UTC,LastTransitionTime:2020-01-27 23:48:20 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-01-27 23:48:27 +0000 UTC,LastTransitionTime:2020-01-27 23:48:20 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 27 23:48:28.248: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-7594 /apis/apps/v1/namespaces/deployment-7594/replicasets/test-rolling-update-deployment-67cf4f6444 2f12f94c-d014-4cba-adff-ae0b4e86423a 4767111 1 2020-01-27 23:48:20 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 007cface-0ab2-483c-9de7-f356ede54e75 0xc0028a5ae7 0xc0028a5ae8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028a5bc8 ClusterFirst map[] false false false
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 27 23:48:28.248: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 27 23:48:28.248: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7594 /apis/apps/v1/namespaces/deployment-7594/replicasets/test-rolling-update-controller 56e31257-763c-4eac-b600-0e3ec548ebd6 4767121 2 2020-01-27 23:48:11 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 007cface-0ab2-483c-9de7-f356ede54e75 0xc0028a5957 0xc0028a5958}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0028a59d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 27 23:48:28.253: INFO: Pod "test-rolling-update-deployment-67cf4f6444-mzvfr" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-mzvfr test-rolling-update-deployment-67cf4f6444- deployment-7594 /api/v1/namespaces/deployment-7594/pods/test-rolling-update-deployment-67cf4f6444-mzvfr c2a55e54-cb92-4818-96b5-d7e78176dda9 4767110 0 2020-01-27 23:48:20 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 2f12f94c-d014-4cba-adff-ae0b4e86423a 0xc00281dc57 0xc00281dc58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7mctv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7mctv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7mctv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-27 23:48:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-27 23:48:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-27 23:48:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-27 23:48:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-27 23:48:20 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-27 23:48:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://6eea6a443197b7f42cbafe6c7aa89b3078f61edb22364af005a653f567c2fcb1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:48:28.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7594" for this suite. • [SLOW TEST:17.278 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":16,"skipped":249,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:48:28.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:48:35.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4062" for this suite. STEP: Destroying namespace "nsdeletetest-5570" for this suite. Jan 27 23:48:35.776: INFO: Namespace nsdeletetest-5570 was already deleted STEP: Destroying namespace "nsdeletetest-8210" for this suite. 
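An aside on the Namespaces spec that just finished: the assertion is that deleting a namespace cascades to the services inside it. A minimal hand-run sketch, assuming a working cluster and using an illustrative namespace name (the suite generates its own nsdeletetest-* names):

  kubectl create namespace nsdelete-demo
  kubectl create service clusterip test-service --tcp=80:80 --namespace=nsdelete-demo
  kubectl delete namespace nsdelete-demo --wait=true   # deletion cascades to the service
  kubectl create namespace nsdelete-demo               # recreate with the same name
  kubectl get services --namespace=nsdelete-demo       # expect: No resources found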
• [SLOW TEST:7.519 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":280,"completed":17,"skipped":266,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:48:35.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating service multi-endpoint-test in namespace services-3499 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3499 to expose endpoints map[] Jan 27 23:48:35.979: INFO: Get endpoints failed (10.34515ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jan 27 23:48:37.019: INFO: successfully validated that service multi-endpoint-test in namespace services-3499 exposes endpoints map[] (1.050466871s elapsed) STEP: Creating pod pod1 in namespace services-3499 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3499 to expose endpoints map[pod1:[100]] Jan 27 23:48:41.190: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.128098993s elapsed, will retry) Jan 27 23:48:44.231: INFO: successfully validated that service multi-endpoint-test in namespace services-3499 exposes endpoints map[pod1:[100]] (7.1683152s elapsed) STEP: Creating pod pod2 in namespace services-3499 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3499 to expose endpoints map[pod1:[100] pod2:[101]] Jan 27 23:48:49.236: INFO: Unexpected endpoints: found map[7920fa26-ab25-4ec2-932c-5e1d223512d1:[100]], expected map[pod1:[100] pod2:[101]] (4.997543775s elapsed, will retry) Jan 27 23:48:51.339: INFO: successfully validated that service multi-endpoint-test in namespace services-3499 exposes endpoints map[pod1:[100] pod2:[101]] (7.100520678s elapsed) STEP: Deleting pod pod1 in namespace services-3499 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3499 to expose endpoints map[pod2:[101]] Jan 27 23:48:52.420: INFO: successfully validated that service multi-endpoint-test in namespace services-3499 exposes endpoints map[pod2:[101]] (1.07551672s elapsed) STEP: Deleting pod pod2 in namespace services-3499 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3499 to expose endpoints map[] Jan 27 23:48:54.321: INFO: successfully validated that service multi-endpoint-test in namespace services-3499 exposes endpoints map[] 
(1.892765652s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:48:54.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3499" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:19.029 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":280,"completed":18,"skipped":285,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:48:54.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 27 23:48:54.873: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:49:03.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4190" for this suite. 
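The listing spec above registers a definition through the Go client and lists it back; roughly the same check can be done with kubectl. A sketch with an assumed, illustrative group and kind (the suite generates its own e2e-test-* names):

  kubectl apply -f - <<EOF
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: widgets.example.com        # illustrative name
  spec:
    group: example.com
    scope: Namespaced
    names:
      plural: widgets
      singular: widget
      kind: Widget
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
  EOF
  kubectl get customresourcedefinitions   # the new definition should appear in the list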
• [SLOW TEST:8.319 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":280,"completed":19,"skipped":340,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:49:03.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:49:35.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-77" for this suite. 
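What lets the Job spec above pass is restartPolicy: OnFailure combined with pod-scoped state: a failing container is restarted in place by the kubelet (the "locally restarted" in the test name) rather than being replaced by a new pod, and an emptyDir volume survives those restarts so a retry can detect the earlier attempt. A minimal sketch with illustrative names and a deliberate first-attempt failure (the conformance test injects failures its own way):

  kubectl apply -f - <<EOF
  apiVersion: batch/v1
  kind: Job
  metadata:
    name: flaky-job                    # illustrative
  spec:
    completions: 2
    template:
      spec:
        restartPolicy: OnFailure       # failed containers restart locally, in the same pod
        volumes:
        - name: data
          emptyDir: {}                 # persists across container restarts within the pod
        containers:
        - name: worker
          image: busybox
          volumeMounts:
          - name: data
            mountPath: /data
          # fail the first attempt, succeed on the local retry
          command: ["sh", "-c", "if [ -f /data/ran ]; then exit 0; else touch /data/ran; exit 1; fi"]
  EOF
  kubectl wait --for=condition=complete job/flaky-job --timeout=120s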
• [SLOW TEST:32.190 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":280,"completed":20,"skipped":362,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:49:35.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on node default medium Jan 27 23:49:35.498: INFO: Waiting up to 5m0s for pod "pod-1f17b27a-51e9-4eb2-9212-89fcb51ae73a" in namespace "emptydir-7640" to be "success or failure" Jan 27 23:49:35.504: INFO: Pod "pod-1f17b27a-51e9-4eb2-9212-89fcb51ae73a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.517468ms Jan 27 23:49:37.509: INFO: Pod "pod-1f17b27a-51e9-4eb2-9212-89fcb51ae73a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011079595s Jan 27 23:49:39.516: INFO: Pod "pod-1f17b27a-51e9-4eb2-9212-89fcb51ae73a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01757808s Jan 27 23:49:41.543: INFO: Pod "pod-1f17b27a-51e9-4eb2-9212-89fcb51ae73a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044517236s Jan 27 23:49:43.565: INFO: Pod "pod-1f17b27a-51e9-4eb2-9212-89fcb51ae73a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067109039s STEP: Saw pod success Jan 27 23:49:43.566: INFO: Pod "pod-1f17b27a-51e9-4eb2-9212-89fcb51ae73a" satisfied condition "success or failure" Jan 27 23:49:43.570: INFO: Trying to get logs from node jerma-node pod pod-1f17b27a-51e9-4eb2-9212-89fcb51ae73a container test-container: STEP: delete the pod Jan 27 23:49:43.654: INFO: Waiting for pod pod-1f17b27a-51e9-4eb2-9212-89fcb51ae73a to disappear Jan 27 23:49:43.708: INFO: Pod pod-1f17b27a-51e9-4eb2-9212-89fcb51ae73a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:49:43.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7640" for this suite. 
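The (root,0666,default) case writes a file as root into a default-medium emptyDir and asserts its mode and content. A rough hand-run equivalent, using busybox in place of the suite's test image:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0666-demo           # illustrative
  spec:
    restartPolicy: Never
    volumes:
    - name: test-volume
      emptyDir: {}                     # default medium, i.e. node-local disk
    containers:
    - name: test-container
      image: busybox
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
      command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
  EOF
  kubectl logs emptydir-0666-demo      # expect a -rw-rw-rw- entry once the pod has succeeded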
• [SLOW TEST:8.398 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":21,"skipped":388,"failed":0} SSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:49:43.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:49:43.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2648" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":280,"completed":22,"skipped":393,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:49:43.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
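An aside on the Pods Set QOS Class spec that passed just above: a pod whose containers set resource limits equal to their requests for both cpu and memory is assigned qosClass: Guaranteed, which is what the test verifies. A minimal sketch with illustrative values:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: qos-guaranteed-demo          # illustrative
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]
      resources:
        requests:
          cpu: 100m
          memory: 64Mi
        limits:
          cpu: 100m                    # equal to the request
          memory: 64Mi                 # equal to the request
  EOF
  kubectl get pod qos-guaranteed-demo -o jsonpath='{.status.qosClass}'   # expect: Guaranteed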
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jan 27 23:50:00.145: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 27 23:50:00.152: INFO: Pod pod-with-poststart-http-hook still exists Jan 27 23:50:02.152: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 27 23:50:02.157: INFO: Pod pod-with-poststart-http-hook still exists Jan 27 23:50:04.152: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 27 23:50:04.262: INFO: Pod pod-with-poststart-http-hook still exists Jan 27 23:50:06.152: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 27 23:50:06.161: INFO: Pod pod-with-poststart-http-hook still exists Jan 27 23:50:08.152: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 27 23:50:08.159: INFO: Pod pod-with-poststart-http-hook still exists Jan 27 23:50:10.152: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 27 23:50:10.159: INFO: Pod pod-with-poststart-http-hook still exists Jan 27 23:50:12.152: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 27 23:50:12.181: INFO: Pod pod-with-poststart-http-hook still exists Jan 27 23:50:14.152: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 27 23:50:14.161: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:50:14.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6168" for this suite. 
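The pod created in this spec pairs a postStart httpGet hook with the handler container started in BeforeEach. A sketch of the shape, where host, port, and path are placeholders for wherever the handler actually listens:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-http-hook   # name as logged; the spec below is illustrative
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]
      lifecycle:
        postStart:
          httpGet:
            host: 10.32.0.4              # placeholder for the handler pod's IP
            port: 8080                   # placeholder port
            path: /echo?msg=poststart    # placeholder path
  EOF

If the hook's GET fails, the kubelet kills the container and restarts it according to its restartPolicy, so a passing spec implies the handler was actually reached.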
• [SLOW TEST:30.299 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":280,"completed":23,"skipped":398,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:50:14.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 27 23:50:14.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-14' Jan 27 23:50:16.333: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 27 23:50:16.333: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Jan 27 23:50:16.367: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-qdg5q] Jan 27 23:50:16.368: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-qdg5q" in namespace "kubectl-14" to be "running and ready" Jan 27 23:50:16.435: INFO: Pod "e2e-test-httpd-rc-qdg5q": Phase="Pending", Reason="", readiness=false. Elapsed: 67.179359ms Jan 27 23:50:18.448: INFO: Pod "e2e-test-httpd-rc-qdg5q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079947483s Jan 27 23:50:20.455: INFO: Pod "e2e-test-httpd-rc-qdg5q": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087108347s Jan 27 23:50:22.464: INFO: Pod "e2e-test-httpd-rc-qdg5q": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096324026s Jan 27 23:50:24.472: INFO: Pod "e2e-test-httpd-rc-qdg5q": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104163816s Jan 27 23:50:26.481: INFO: Pod "e2e-test-httpd-rc-qdg5q": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.113661541s Jan 27 23:50:26.482: INFO: Pod "e2e-test-httpd-rc-qdg5q" satisfied condition "running and ready" Jan 27 23:50:26.482: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-qdg5q] Jan 27 23:50:26.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-14' Jan 27 23:50:26.742: INFO: stderr: "" Jan 27 23:50:26.742: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.2. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.2. Set the 'ServerName' directive globally to suppress this message\n[Mon Jan 27 23:50:23.053043 2020] [mpm_event:notice] [pid 1:tid 140637297433448] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Mon Jan 27 23:50:23.053120 2020] [core:notice] [pid 1:tid 140637297433448] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1639 Jan 27 23:50:26.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-14' Jan 27 23:50:26.917: INFO: stderr: "" Jan 27 23:50:26.917: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:50:26.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-14" for this suite. • [SLOW TEST:12.753 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1630 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":280,"completed":24,"skipped":406,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:50:26.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod test-webserver-c940f116-e580-4bcf-8911-7a466438f04c in namespace container-probe-3409 Jan 27 23:50:35.057: INFO: Started pod test-webserver-c940f116-e580-4bcf-8911-7a466438f04c in namespace container-probe-3409 STEP: checking the 
pod's current state and verifying that restartCount is present Jan 27 23:50:35.071: INFO: Initial restart count of pod test-webserver-c940f116-e580-4bcf-8911-7a466438f04c is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:54:36.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3409" for this suite. • [SLOW TEST:249.769 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":25,"skipped":419,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:54:36.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466 STEP: creating a pod Jan 27 23:54:36.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-996 -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 27 23:54:36.996: INFO: stderr: "" Jan 27 23:54:36.996: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Waiting for log generator to start. Jan 27 23:54:36.996: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 27 23:54:36.997: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-996" to be "running and ready, or succeeded" Jan 27 23:54:37.008: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 11.565803ms Jan 27 23:54:39.014: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017432438s Jan 27 23:54:41.033: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03642372s Jan 27 23:54:43.059: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061879985s Jan 27 23:54:45.063: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.06647488s Jan 27 23:54:45.063: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 27 23:54:45.063: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for a matching strings Jan 27 23:54:45.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-996' Jan 27 23:54:45.230: INFO: stderr: "" Jan 27 23:54:45.230: INFO: stdout: "I0127 23:54:43.068624 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/q4lt 460\nI0127 23:54:43.269147 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/txt 579\nI0127 23:54:43.469190 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/sbbx 375\nI0127 23:54:43.669466 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/9wrr 392\nI0127 23:54:43.869324 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/55c 469\nI0127 23:54:44.069053 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/v4p 557\nI0127 23:54:44.268998 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/pd9 212\nI0127 23:54:44.469173 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/qhpr 434\nI0127 23:54:44.669009 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/lzfl 504\nI0127 23:54:44.868995 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/6k7v 251\nI0127 23:54:45.068964 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/cjd 259\n" STEP: limiting log lines Jan 27 23:54:45.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-996 --tail=1' Jan 27 23:54:45.424: INFO: stderr: "" Jan 27 23:54:45.425: INFO: stdout: "I0127 23:54:45.268933 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/lqx6 421\n" Jan 27 23:54:45.425: INFO: got output "I0127 23:54:45.268933 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/lqx6 421\n" STEP: limiting log bytes Jan 27 23:54:45.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-996 --limit-bytes=1' Jan 27 23:54:45.558: INFO: stderr: "" Jan 27 23:54:45.559: INFO: stdout: "I" Jan 27 23:54:45.559: INFO: got output "I" STEP: exposing timestamps Jan 27 23:54:45.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-996 --tail=1 --timestamps' Jan 27 23:54:45.741: INFO: stderr: "" Jan 27 23:54:45.741: INFO: stdout: "2020-01-27T23:54:45.670589283Z I0127 23:54:45.669638 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/kdqd 465\n" Jan 27 23:54:45.741: INFO: got output "2020-01-27T23:54:45.670589283Z I0127 23:54:45.669638 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/kdqd 465\n" STEP: restricting to a time range Jan 27 23:54:48.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-996 --since=1s' Jan 27 23:54:48.469: INFO: stderr: "" Jan 27 23:54:48.469: INFO: stdout: "I0127 23:54:47.469092 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/7ct 301\nI0127 23:54:47.669105 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/sws 332\nI0127 23:54:47.869144 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/w6p 365\nI0127 23:54:48.068827 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/f9pk 204\nI0127 23:54:48.269607 1 logs_generator.go:76] 26 GET /api/v1/namespaces/ns/pods/x4n 507\n" Jan 27 23:54:48.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs 
logs-generator logs-generator --namespace=kubectl-996 --since=24h' Jan 27 23:54:48.691: INFO: stderr: "" Jan 27 23:54:48.691: INFO: stdout: "I0127 23:54:43.068624 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/q4lt 460\nI0127 23:54:43.269147 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/txt 579\nI0127 23:54:43.469190 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/sbbx 375\nI0127 23:54:43.669466 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/9wrr 392\nI0127 23:54:43.869324 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/55c 469\nI0127 23:54:44.069053 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/v4p 557\nI0127 23:54:44.268998 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/pd9 212\nI0127 23:54:44.469173 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/qhpr 434\nI0127 23:54:44.669009 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/lzfl 504\nI0127 23:54:44.868995 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/6k7v 251\nI0127 23:54:45.068964 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/cjd 259\nI0127 23:54:45.268933 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/lqx6 421\nI0127 23:54:45.468907 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/j4xp 252\nI0127 23:54:45.669638 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/kdqd 465\nI0127 23:54:45.869110 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/ns/pods/fsvk 434\nI0127 23:54:46.069132 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/p624 238\nI0127 23:54:46.269223 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/dwzd 221\nI0127 23:54:46.469325 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/6xz 337\nI0127 23:54:46.672567 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/ptr 546\nI0127 23:54:46.869000 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/swn 280\nI0127 23:54:47.068976 1 logs_generator.go:76] 20 GET /api/v1/namespaces/ns/pods/zp4 469\nI0127 23:54:47.268915 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/x5lx 459\nI0127 23:54:47.469092 1 logs_generator.go:76] 22 POST /api/v1/namespaces/default/pods/7ct 301\nI0127 23:54:47.669105 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/sws 332\nI0127 23:54:47.869144 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/w6p 365\nI0127 23:54:48.068827 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/f9pk 204\nI0127 23:54:48.269607 1 logs_generator.go:76] 26 GET /api/v1/namespaces/ns/pods/x4n 507\nI0127 23:54:48.469310 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/xds 423\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1472 Jan 27 23:54:48.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-996' Jan 27 23:55:02.406: INFO: stderr: "" Jan 27 23:55:02.406: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:55:02.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-996" for this suite. 
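Condensed, the filtering matrix this spec walked through is one stream queried with different kubectl logs selectors, all of which appear verbatim in the log above:

  kubectl logs logs-generator --namespace=kubectl-996                        # full stream
  kubectl logs logs-generator --namespace=kubectl-996 --tail=1               # last line only
  kubectl logs logs-generator --namespace=kubectl-996 --limit-bytes=1        # first byte only
  kubectl logs logs-generator --namespace=kubectl-996 --tail=1 --timestamps  # prefix RFC3339 timestamps
  kubectl logs logs-generator --namespace=kubectl-996 --since=1s             # only the most recent second
  kubectl logs logs-generator --namespace=kubectl-996 --since=24h            # effectively the full stream again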
• [SLOW TEST:25.768 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1462 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":280,"completed":26,"skipped":442,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:55:02.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 27 23:55:03.220: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 27 23:55:05.243: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766103, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766103, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766103, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766103, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 23:55:07.254: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766103, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766103, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766103, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766103, loc:(*time.Location)(0x7e52ca0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 27 23:55:09.250: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766103, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766103, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766103, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766103, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 27 23:55:12.374: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 27 23:55:12.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6627-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:55:13.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3941" for this suite. STEP: Destroying namespace "webhook-3941-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:11.073 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":280,"completed":27,"skipped":443,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:55:13.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-413 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-413 STEP: Creating statefulset with conflicting port in namespace statefulset-413 STEP: Waiting until pod test-pod will start running in namespace statefulset-413 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-413 Jan 27 23:55:27.772: INFO: Observed stateful pod in namespace: statefulset-413, name: ss-0, uid: f464860e-8bb2-42c4-ba91-c4c386142488, status phase: Pending. Waiting for statefulset controller to delete. Jan 27 23:55:28.260: INFO: Observed stateful pod in namespace: statefulset-413, name: ss-0, uid: f464860e-8bb2-42c4-ba91-c4c386142488, status phase: Failed. Waiting for statefulset controller to delete. Jan 27 23:55:28.270: INFO: Observed stateful pod in namespace: statefulset-413, name: ss-0, uid: f464860e-8bb2-42c4-ba91-c4c386142488, status phase: Failed. Waiting for statefulset controller to delete. 
Jan 27 23:55:28.285: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-413 STEP: Removing pod with conflicting port in namespace statefulset-413 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-413 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Jan 27 23:55:40.687: INFO: Deleting all statefulset in ns statefulset-413 Jan 27 23:55:40.692: INFO: Scaling statefulset ss to 0 Jan 27 23:56:00.740: INFO: Waiting for statefulset status.replicas updated to 0 Jan 27 23:56:00.744: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:56:00.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-413" for this suite. • [SLOW TEST:47.248 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":280,"completed":28,"skipped":470,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:56:00.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service nodeport-service with the type=NodePort in namespace services-477 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-477 STEP: creating replication controller externalsvc in namespace services-477 I0127 23:56:01.017146 9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-477, replica count: 2 I0127 23:56:04.068379 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0127 23:56:07.069279 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0127 23:56:10.069912 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady I0127 23:56:13.070438 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jan 27 23:56:13.123: INFO: Creating new exec pod Jan 27 23:56:21.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-477 execpodfgp9z -- /bin/sh -x -c nslookup nodeport-service' Jan 27 23:56:21.570: INFO: stderr: "I0127 23:56:21.375771 1022 log.go:172] (0xc000b45290) (0xc0009965a0) Create stream\nI0127 23:56:21.376232 1022 log.go:172] (0xc000b45290) (0xc0009965a0) Stream added, broadcasting: 1\nI0127 23:56:21.390679 1022 log.go:172] (0xc000b45290) Reply frame received for 1\nI0127 23:56:21.390814 1022 log.go:172] (0xc000b45290) (0xc0006ffc20) Create stream\nI0127 23:56:21.390845 1022 log.go:172] (0xc000b45290) (0xc0006ffc20) Stream added, broadcasting: 3\nI0127 23:56:21.393178 1022 log.go:172] (0xc000b45290) Reply frame received for 3\nI0127 23:56:21.393324 1022 log.go:172] (0xc000b45290) (0xc000648820) Create stream\nI0127 23:56:21.393346 1022 log.go:172] (0xc000b45290) (0xc000648820) Stream added, broadcasting: 5\nI0127 23:56:21.397429 1022 log.go:172] (0xc000b45290) Reply frame received for 5\nI0127 23:56:21.473858 1022 log.go:172] (0xc000b45290) Data frame received for 5\nI0127 23:56:21.473999 1022 log.go:172] (0xc000648820) (5) Data frame handling\nI0127 23:56:21.474037 1022 log.go:172] (0xc000648820) (5) Data frame sent\n+ nslookup nodeport-service\nI0127 23:56:21.488896 1022 log.go:172] (0xc000b45290) Data frame received for 3\nI0127 23:56:21.488954 1022 log.go:172] (0xc0006ffc20) (3) Data frame handling\nI0127 23:56:21.488988 1022 log.go:172] (0xc0006ffc20) (3) Data frame sent\nI0127 23:56:21.491684 1022 log.go:172] (0xc000b45290) Data frame received for 3\nI0127 23:56:21.491770 1022 log.go:172] (0xc0006ffc20) (3) Data frame handling\nI0127 23:56:21.491804 1022 log.go:172] (0xc0006ffc20) (3) Data frame sent\nI0127 23:56:21.556493 1022 log.go:172] (0xc000b45290) Data frame received for 1\nI0127 23:56:21.556606 1022 log.go:172] (0xc0009965a0) (1) Data frame handling\nI0127 23:56:21.556633 1022 log.go:172] (0xc0009965a0) (1) Data frame sent\nI0127 23:56:21.556902 1022 log.go:172] (0xc000b45290) (0xc0009965a0) Stream removed, broadcasting: 1\nI0127 23:56:21.559472 1022 log.go:172] (0xc000b45290) (0xc000648820) Stream removed, broadcasting: 5\nI0127 23:56:21.559831 1022 log.go:172] (0xc000b45290) (0xc0006ffc20) Stream removed, broadcasting: 3\nI0127 23:56:21.560164 1022 log.go:172] (0xc000b45290) (0xc0009965a0) Stream removed, broadcasting: 1\nI0127 23:56:21.560203 1022 log.go:172] (0xc000b45290) (0xc0006ffc20) Stream removed, broadcasting: 3\nI0127 23:56:21.560218 1022 log.go:172] (0xc000b45290) (0xc000648820) Stream removed, broadcasting: 5\nI0127 23:56:21.560495 1022 log.go:172] (0xc000b45290) Go away received\n" Jan 27 23:56:21.570: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-477.svc.cluster.local\tcanonical name = externalsvc.services-477.svc.cluster.local.\nName:\texternalsvc.services-477.svc.cluster.local\nAddress: 10.96.196.157\n\n" STEP: deleting ReplicationController externalsvc in namespace services-477, will wait for the garbage collector to delete the pods Jan 27 23:56:21.637: INFO: Deleting ReplicationController externalsvc took: 11.825603ms Jan 27 23:56:21.737: INFO: Terminating ReplicationController externalsvc pods took: 100.250808ms 
Jan 27 23:56:32.482: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:56:32.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-477" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:31.764 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":280,"completed":29,"skipped":478,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:56:32.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1694 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jan 27 23:56:32.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-4096' Jan 27 23:56:32.893: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 27 23:56:32.893: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Jan 27 23:56:32.916: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Jan 27 23:56:32.933: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Jan 27 23:56:32.987: INFO: scanned /root for discovery docs: Jan 27 23:56:32.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-4096' Jan 27 23:56:55.531: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 27 23:56:55.531: INFO: stdout: "Created e2e-test-httpd-rc-8fdfb3a95f2c63987220aa7d9fb56c08\nScaling up e2e-test-httpd-rc-8fdfb3a95f2c63987220aa7d9fb56c08 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-8fdfb3a95f2c63987220aa7d9fb56c08 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-8fdfb3a95f2c63987220aa7d9fb56c08 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Jan 27 23:56:55.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-4096' Jan 27 23:56:55.692: INFO: stderr: "" Jan 27 23:56:55.692: INFO: stdout: "e2e-test-httpd-rc-8fdfb3a95f2c63987220aa7d9fb56c08-vhx26 " Jan 27 23:56:55.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-8fdfb3a95f2c63987220aa7d9fb56c08-vhx26 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4096' Jan 27 23:56:55.824: INFO: stderr: "" Jan 27 23:56:55.824: INFO: stdout: "true" Jan 27 23:56:55.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-8fdfb3a95f2c63987220aa7d9fb56c08-vhx26 -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4096' Jan 27 23:56:55.991: INFO: stderr: "" Jan 27 23:56:55.991: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Jan 27 23:56:55.991: INFO: e2e-test-httpd-rc-8fdfb3a95f2c63987220aa7d9fb56c08-vhx26 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1700 Jan 27 23:56:55.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-4096' Jan 27 23:56:56.163: INFO: stderr: "" Jan 27 23:56:56.163: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:56:56.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4096" for this suite. • [SLOW TEST:23.619 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1689 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":280,"completed":30,"skipped":509,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:56:56.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-90aae0fe-16d8-49ed-9b94-95baec46caab STEP: Creating a pod to test consume secrets Jan 27 23:56:56.323: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9b972c06-6bb7-4eee-be09-6f3c15945af9" in namespace "projected-1270" to be "success or failure" Jan 27 23:56:56.334: INFO: Pod "pod-projected-secrets-9b972c06-6bb7-4eee-be09-6f3c15945af9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.64806ms Jan 27 23:56:58.724: INFO: Pod "pod-projected-secrets-9b972c06-6bb7-4eee-be09-6f3c15945af9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.400693076s Jan 27 23:57:00.733: INFO: Pod "pod-projected-secrets-9b972c06-6bb7-4eee-be09-6f3c15945af9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.41028358s Jan 27 23:57:02.741: INFO: Pod "pod-projected-secrets-9b972c06-6bb7-4eee-be09-6f3c15945af9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.417643402s Jan 27 23:57:04.748: INFO: Pod "pod-projected-secrets-9b972c06-6bb7-4eee-be09-6f3c15945af9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.424583531s Jan 27 23:57:06.754: INFO: Pod "pod-projected-secrets-9b972c06-6bb7-4eee-be09-6f3c15945af9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.431397571s STEP: Saw pod success Jan 27 23:57:06.755: INFO: Pod "pod-projected-secrets-9b972c06-6bb7-4eee-be09-6f3c15945af9" satisfied condition "success or failure" Jan 27 23:57:06.761: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-9b972c06-6bb7-4eee-be09-6f3c15945af9 container projected-secret-volume-test: STEP: delete the pod Jan 27 23:57:06.908: INFO: Waiting for pod pod-projected-secrets-9b972c06-6bb7-4eee-be09-6f3c15945af9 to disappear Jan 27 23:57:06.917: INFO: Pod pod-projected-secrets-9b972c06-6bb7-4eee-be09-6f3c15945af9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:57:06.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1270" for this suite. • [SLOW TEST:10.777 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":31,"skipped":530,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:57:06.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 27 23:57:14.275: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:57:14.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1485" for this suite. 
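The termination-message check above can be reproduced by hand with a minimal pod manifest. This is a sketch only, assuming a reachable cluster and kubectl; the pod name, image tag, user ID, and custom path below are illustrative and are not taken from this run. The container runs as a non-root user, writes DONE to a non-default terminationMessagePath, and the kubelet then surfaces the message in the terminated container status:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    securityContext:
      runAsUser: 1000              # non-root, as in the test variant above
    command: ["/bin/sh", "-c", "printf DONE > /dev/termination-custom-path"]
    terminationMessagePath: /dev/termination-custom-path
EOF
# once the pod has terminated, read the message back:
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'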
• [SLOW TEST:7.418 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":280,"completed":32,"skipped":534,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:57:14.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating replication controller my-hostname-basic-de36b881-865b-4393-955c-e77901ff6ad5 Jan 27 23:57:14.609: INFO: Pod name my-hostname-basic-de36b881-865b-4393-955c-e77901ff6ad5: Found 0 pods out of 1 Jan 27 23:57:19.615: INFO: Pod name my-hostname-basic-de36b881-865b-4393-955c-e77901ff6ad5: Found 1 pods out of 1 Jan 27 23:57:19.615: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-de36b881-865b-4393-955c-e77901ff6ad5" are running Jan 27 23:57:21.631: INFO: Pod "my-hostname-basic-de36b881-865b-4393-955c-e77901ff6ad5-gmfcv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 23:57:14 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 23:57:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-de36b881-865b-4393-955c-e77901ff6ad5]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 23:57:14 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-de36b881-865b-4393-955c-e77901ff6ad5]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-27 23:57:14 +0000 UTC Reason: Message:}]) Jan 27 23:57:21.631: INFO: Trying to dial the pod Jan 27 23:57:26.656: INFO: Controller my-hostname-basic-de36b881-865b-4393-955c-e77901ff6ad5: Got expected result from replica 1 [my-hostname-basic-de36b881-865b-4393-955c-e77901ff6ad5-gmfcv]: "my-hostname-basic-de36b881-865b-4393-955c-e77901ff6ad5-gmfcv", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:57:26.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2301" for this suite. • [SLOW TEST:12.297 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":280,"completed":33,"skipped":567,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:57:26.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 27 23:57:26.774: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c8cbc65-4d93-49d6-a00b-5ee3bfbc0602" in namespace "downward-api-1777" to be "success or failure" Jan 27 23:57:26.796: INFO: Pod "downwardapi-volume-5c8cbc65-4d93-49d6-a00b-5ee3bfbc0602": Phase="Pending", Reason="", readiness=false. Elapsed: 22.232712ms Jan 27 23:57:29.113: INFO: Pod "downwardapi-volume-5c8cbc65-4d93-49d6-a00b-5ee3bfbc0602": Phase="Pending", Reason="", readiness=false. Elapsed: 2.338482612s Jan 27 23:57:31.119: INFO: Pod "downwardapi-volume-5c8cbc65-4d93-49d6-a00b-5ee3bfbc0602": Phase="Pending", Reason="", readiness=false. Elapsed: 4.34475132s Jan 27 23:57:33.125: INFO: Pod "downwardapi-volume-5c8cbc65-4d93-49d6-a00b-5ee3bfbc0602": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.351227369s STEP: Saw pod success Jan 27 23:57:33.126: INFO: Pod "downwardapi-volume-5c8cbc65-4d93-49d6-a00b-5ee3bfbc0602" satisfied condition "success or failure" Jan 27 23:57:33.130: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-5c8cbc65-4d93-49d6-a00b-5ee3bfbc0602 container client-container: STEP: delete the pod Jan 27 23:57:33.214: INFO: Waiting for pod downwardapi-volume-5c8cbc65-4d93-49d6-a00b-5ee3bfbc0602 to disappear Jan 27 23:57:33.223: INFO: Pod downwardapi-volume-5c8cbc65-4d93-49d6-a00b-5ee3bfbc0602 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:57:33.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1777" for this suite. 
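The downward API behavior verified above (falling back to the node's allocatable CPU when the container sets no CPU limit) can be sketched as follows, assuming a working cluster; the pod and volume names are illustrative rather than taken from the run:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu is set, so the projected value falls back
    # to the node's allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
kubectl logs downward-cpu-demo     # prints the defaulted CPU limit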
• [SLOW TEST:6.569 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":34,"skipped":571,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:57:33.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-downwardapi-gs6m STEP: Creating a pod to test atomic-volume-subpath Jan 27 23:57:33.605: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-gs6m" in namespace "subpath-7909" to be "success or failure" Jan 27 23:57:33.628: INFO: Pod "pod-subpath-test-downwardapi-gs6m": Phase="Pending", Reason="", readiness=false. Elapsed: 22.540366ms Jan 27 23:57:35.635: INFO: Pod "pod-subpath-test-downwardapi-gs6m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029495377s Jan 27 23:57:37.645: INFO: Pod "pod-subpath-test-downwardapi-gs6m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039408467s Jan 27 23:57:39.652: INFO: Pod "pod-subpath-test-downwardapi-gs6m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046154857s Jan 27 23:57:41.657: INFO: Pod "pod-subpath-test-downwardapi-gs6m": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051650652s Jan 27 23:57:43.669: INFO: Pod "pod-subpath-test-downwardapi-gs6m": Phase="Running", Reason="", readiness=true. Elapsed: 10.063297251s Jan 27 23:57:45.676: INFO: Pod "pod-subpath-test-downwardapi-gs6m": Phase="Running", Reason="", readiness=true. Elapsed: 12.070500755s Jan 27 23:57:47.682: INFO: Pod "pod-subpath-test-downwardapi-gs6m": Phase="Running", Reason="", readiness=true. Elapsed: 14.076984198s Jan 27 23:57:49.688: INFO: Pod "pod-subpath-test-downwardapi-gs6m": Phase="Running", Reason="", readiness=true. Elapsed: 16.082734608s Jan 27 23:57:51.693: INFO: Pod "pod-subpath-test-downwardapi-gs6m": Phase="Running", Reason="", readiness=true. Elapsed: 18.087938659s Jan 27 23:57:53.701: INFO: Pod "pod-subpath-test-downwardapi-gs6m": Phase="Running", Reason="", readiness=true. Elapsed: 20.095565165s Jan 27 23:57:55.717: INFO: Pod "pod-subpath-test-downwardapi-gs6m": Phase="Running", Reason="", readiness=true. Elapsed: 22.111448576s Jan 27 23:57:57.724: INFO: Pod "pod-subpath-test-downwardapi-gs6m": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.118594148s Jan 27 23:57:59.732: INFO: Pod "pod-subpath-test-downwardapi-gs6m": Phase="Running", Reason="", readiness=true. Elapsed: 26.126349916s Jan 27 23:58:01.739: INFO: Pod "pod-subpath-test-downwardapi-gs6m": Phase="Running", Reason="", readiness=true. Elapsed: 28.133439766s Jan 27 23:58:03.747: INFO: Pod "pod-subpath-test-downwardapi-gs6m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.141486406s STEP: Saw pod success Jan 27 23:58:03.747: INFO: Pod "pod-subpath-test-downwardapi-gs6m" satisfied condition "success or failure" Jan 27 23:58:03.751: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-gs6m container test-container-subpath-downwardapi-gs6m: STEP: delete the pod Jan 27 23:58:04.007: INFO: Waiting for pod pod-subpath-test-downwardapi-gs6m to disappear Jan 27 23:58:04.016: INFO: Pod pod-subpath-test-downwardapi-gs6m no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-gs6m Jan 27 23:58:04.016: INFO: Deleting pod "pod-subpath-test-downwardapi-gs6m" in namespace "subpath-7909" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:58:04.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7909" for this suite. • [SLOW TEST:30.793 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":280,"completed":35,"skipped":582,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:58:04.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 27 23:58:04.176: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6972f81c-4697-4219-b2c9-6458c71a0e51" in namespace "downward-api-3446" to be "success or failure" Jan 27 23:58:04.203: INFO: Pod "downwardapi-volume-6972f81c-4697-4219-b2c9-6458c71a0e51": Phase="Pending", Reason="", readiness=false. 
Elapsed: 27.06454ms Jan 27 23:58:06.213: INFO: Pod "downwardapi-volume-6972f81c-4697-4219-b2c9-6458c71a0e51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036317271s Jan 27 23:58:08.244: INFO: Pod "downwardapi-volume-6972f81c-4697-4219-b2c9-6458c71a0e51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068053815s Jan 27 23:58:10.250: INFO: Pod "downwardapi-volume-6972f81c-4697-4219-b2c9-6458c71a0e51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073864179s Jan 27 23:58:12.255: INFO: Pod "downwardapi-volume-6972f81c-4697-4219-b2c9-6458c71a0e51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078240171s STEP: Saw pod success Jan 27 23:58:12.255: INFO: Pod "downwardapi-volume-6972f81c-4697-4219-b2c9-6458c71a0e51" satisfied condition "success or failure" Jan 27 23:58:12.258: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6972f81c-4697-4219-b2c9-6458c71a0e51 container client-container: STEP: delete the pod Jan 27 23:58:12.347: INFO: Waiting for pod downwardapi-volume-6972f81c-4697-4219-b2c9-6458c71a0e51 to disappear Jan 27 23:58:12.364: INFO: Pod downwardapi-volume-6972f81c-4697-4219-b2c9-6458c71a0e51 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:58:12.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3446" for this suite. • [SLOW TEST:8.339 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":36,"skipped":699,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:58:12.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Jan 27 23:58:12.508: INFO: Waiting up to 5m0s for pod "downward-api-7cc82c1f-fcd7-43a2-9105-23536c80e318" in namespace "downward-api-8286" to be "success or failure" Jan 27 23:58:12.553: INFO: Pod "downward-api-7cc82c1f-fcd7-43a2-9105-23536c80e318": Phase="Pending", Reason="", readiness=false. Elapsed: 44.26611ms Jan 27 23:58:14.581: INFO: Pod "downward-api-7cc82c1f-fcd7-43a2-9105-23536c80e318": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.072380543s Jan 27 23:58:16.589: INFO: Pod "downward-api-7cc82c1f-fcd7-43a2-9105-23536c80e318": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080561779s Jan 27 23:58:18.602: INFO: Pod "downward-api-7cc82c1f-fcd7-43a2-9105-23536c80e318": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09385564s Jan 27 23:58:20.610: INFO: Pod "downward-api-7cc82c1f-fcd7-43a2-9105-23536c80e318": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101196998s STEP: Saw pod success Jan 27 23:58:20.610: INFO: Pod "downward-api-7cc82c1f-fcd7-43a2-9105-23536c80e318" satisfied condition "success or failure" Jan 27 23:58:20.614: INFO: Trying to get logs from node jerma-node pod downward-api-7cc82c1f-fcd7-43a2-9105-23536c80e318 container dapi-container: STEP: delete the pod Jan 27 23:58:20.743: INFO: Waiting for pod downward-api-7cc82c1f-fcd7-43a2-9105-23536c80e318 to disappear Jan 27 23:58:20.760: INFO: Pod downward-api-7cc82c1f-fcd7-43a2-9105-23536c80e318 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:58:20.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8286" for this suite. • [SLOW TEST:8.412 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":280,"completed":37,"skipped":705,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:58:20.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a Namespace STEP: patching the Namespace STEP: getting the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:58:21.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7799" for this suite. STEP: Destroying namespace "nspatchtest-e26a58f4-26cc-4ca6-b3e8-f2a831cefc12-3171" for this suite.
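The namespace patch flow above maps onto three kubectl commands. A minimal sketch; the namespace name and label are illustrative:

kubectl create namespace patch-demo
kubectl patch namespace patch-demo \
  -p '{"metadata":{"labels":{"testLabel":"testValue"}}}'
kubectl get namespace patch-demo --show-labels   # the label should be present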
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":280,"completed":38,"skipped":706,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:58:21.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0666 on tmpfs Jan 27 23:58:21.299: INFO: Waiting up to 5m0s for pod "pod-9ca6eb6d-4b6b-474a-b8a9-2ff2f4fb6a71" in namespace "emptydir-7121" to be "success or failure" Jan 27 23:58:21.480: INFO: Pod "pod-9ca6eb6d-4b6b-474a-b8a9-2ff2f4fb6a71": Phase="Pending", Reason="", readiness=false. Elapsed: 180.217967ms Jan 27 23:58:23.487: INFO: Pod "pod-9ca6eb6d-4b6b-474a-b8a9-2ff2f4fb6a71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187342777s Jan 27 23:58:25.494: INFO: Pod "pod-9ca6eb6d-4b6b-474a-b8a9-2ff2f4fb6a71": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194288067s Jan 27 23:58:27.503: INFO: Pod "pod-9ca6eb6d-4b6b-474a-b8a9-2ff2f4fb6a71": Phase="Pending", Reason="", readiness=false. Elapsed: 6.203881564s Jan 27 23:58:29.513: INFO: Pod "pod-9ca6eb6d-4b6b-474a-b8a9-2ff2f4fb6a71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.213700364s STEP: Saw pod success Jan 27 23:58:29.513: INFO: Pod "pod-9ca6eb6d-4b6b-474a-b8a9-2ff2f4fb6a71" satisfied condition "success or failure" Jan 27 23:58:29.520: INFO: Trying to get logs from node jerma-node pod pod-9ca6eb6d-4b6b-474a-b8a9-2ff2f4fb6a71 container test-container: STEP: delete the pod Jan 27 23:58:29.644: INFO: Waiting for pod pod-9ca6eb6d-4b6b-474a-b8a9-2ff2f4fb6a71 to disappear Jan 27 23:58:29.654: INFO: Pod pod-9ca6eb6d-4b6b-474a-b8a9-2ff2f4fb6a71 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:58:29.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7121" for this suite. 
• [SLOW TEST:8.525 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":39,"skipped":709,"failed":0} S ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:58:29.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jan 27 23:58:40.348: INFO: Successfully updated pod "adopt-release-9h478" STEP: Checking that the Job readopts the Pod Jan 27 23:58:40.348: INFO: Waiting up to 15m0s for pod "adopt-release-9h478" in namespace "job-1891" to be "adopted" Jan 27 23:58:40.384: INFO: Pod "adopt-release-9h478": Phase="Running", Reason="", readiness=true. Elapsed: 35.970078ms Jan 27 23:58:42.391: INFO: Pod "adopt-release-9h478": Phase="Running", Reason="", readiness=true. Elapsed: 2.043034743s Jan 27 23:58:42.392: INFO: Pod "adopt-release-9h478" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jan 27 23:58:42.905: INFO: Successfully updated pod "adopt-release-9h478" STEP: Checking that the Job releases the Pod Jan 27 23:58:42.905: INFO: Waiting up to 15m0s for pod "adopt-release-9h478" in namespace "job-1891" to be "released" Jan 27 23:58:42.922: INFO: Pod "adopt-release-9h478": Phase="Running", Reason="", readiness=true. Elapsed: 16.951865ms Jan 27 23:58:44.930: INFO: Pod "adopt-release-9h478": Phase="Running", Reason="", readiness=true. Elapsed: 2.025110356s Jan 27 23:58:44.931: INFO: Pod "adopt-release-9h478" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:58:44.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1891" for this suite. 
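Adoption and release in the Job test above hinge on labels and ownerReferences: a pod whose labels match the Job's selector but which has no controller ownerReference is adopted, while a pod whose labels stop matching is released. A hedged sketch of the two manual steps; <job-pod> is a placeholder, and the e2e test performs the equivalent through direct pod updates:

# orphan a Job pod by stripping its controller ownerReference;
# the Job controller re-adopts it because the labels still match
kubectl patch pod <job-pod> --type=json \
  -p '[{"op":"remove","path":"/metadata/ownerReferences"}]'

# release the pod by removing the matching labels;
# the controller then disowns it and creates a replacement
kubectl label pod <job-pod> job-name- controller-uid-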
• [SLOW TEST:15.276 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":280,"completed":40,"skipped":710,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:58:44.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 27 23:58:45.065: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11cb60b3-954a-4555-9f9a-606a63511bea" in namespace "downward-api-45" to be "success or failure" Jan 27 23:58:45.103: INFO: Pod "downwardapi-volume-11cb60b3-954a-4555-9f9a-606a63511bea": Phase="Pending", Reason="", readiness=false. Elapsed: 38.469987ms Jan 27 23:58:47.112: INFO: Pod "downwardapi-volume-11cb60b3-954a-4555-9f9a-606a63511bea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047314129s Jan 27 23:58:49.119: INFO: Pod "downwardapi-volume-11cb60b3-954a-4555-9f9a-606a63511bea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05445993s Jan 27 23:58:51.126: INFO: Pod "downwardapi-volume-11cb60b3-954a-4555-9f9a-606a63511bea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060983679s Jan 27 23:58:53.131: INFO: Pod "downwardapi-volume-11cb60b3-954a-4555-9f9a-606a63511bea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066133585s STEP: Saw pod success Jan 27 23:58:53.131: INFO: Pod "downwardapi-volume-11cb60b3-954a-4555-9f9a-606a63511bea" satisfied condition "success or failure" Jan 27 23:58:53.134: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-11cb60b3-954a-4555-9f9a-606a63511bea container client-container: STEP: delete the pod Jan 27 23:58:53.215: INFO: Waiting for pod downwardapi-volume-11cb60b3-954a-4555-9f9a-606a63511bea to disappear Jan 27 23:58:53.226: INFO: Pod downwardapi-volume-11cb60b3-954a-4555-9f9a-606a63511bea no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:58:53.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-45" for this suite. 
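DefaultMode, exercised above, sets the permission bits on every file projected by the volume. A minimal sketch with illustrative names, using mode 0400 so the projected file comes out owner-read-only:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-defaultmode-demo  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "ls -lL /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400            # applied to each projected file
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downward-defaultmode-demo   # expect -r-------- on the file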
• [SLOW TEST:8.298 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":41,"skipped":717,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:58:53.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod busybox-7be84848-fa81-45c3-86d2-1d617673c388 in namespace container-probe-7520 Jan 27 23:59:01.445: INFO: Started pod busybox-7be84848-fa81-45c3-86d2-1d617673c388 in namespace container-probe-7520 STEP: checking the pod's current state and verifying that restartCount is present Jan 27 23:59:01.449: INFO: Initial restart count of pod busybox-7be84848-fa81-45c3-86d2-1d617673c388 is 0 Jan 27 23:59:49.652: INFO: Restart count of pod container-probe-7520/busybox-7be84848-fa81-45c3-86d2-1d617673c388 is now 1 (48.202591691s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 27 23:59:49.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7520" for this suite. 
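The restart sequence above (restartCount going from 0 to 1 once the probe starts failing) is the canonical exec-probe pattern from the Kubernetes documentation. A minimal sketch; the pod name and timings are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo         # hypothetical name
spec:
  containers:
  - name: liveness
    image: busybox:1.29
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # fails once the file is removed
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# after roughly 30s the probe fails and the kubelet restarts the container
kubectl get pod liveness-exec-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'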
• [SLOW TEST:56.470 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":42,"skipped":721,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 27 23:59:49.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating the pod Jan 27 23:59:58.437: INFO: Successfully updated pod "labelsupdate1174cb69-4e60-4a0e-92e5-fb33e0bbabfb" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 28 00:00:00.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9761" for this suite. 
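The label-update check above relies on the kubelet periodically refreshing downwardAPI volume contents, so a label change appears in the projected file without restarting the pod. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-update-demo         # hypothetical name
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
kubectl label pod labels-update-demo key=value2 --overwrite
# within the kubelet sync period the projected file reflects the new value
kubectl exec labels-update-demo -- cat /etc/podinfo/labels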
• [SLOW TEST:10.825 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":43,"skipped":738,"failed":0} SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 28 00:00:00.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Jan 28 00:00:00.675: INFO: PodSpec: initContainers in spec.initContainers Jan 28 00:01:00.596: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-faac7826-b2b9-4618-937c-cab85277b741", GenerateName:"", Namespace:"init-container-123", SelfLink:"/api/v1/namespaces/init-container-123/pods/pod-init-faac7826-b2b9-4618-937c-cab85277b741", UID:"2b8ad369-5f9c-403e-8953-a464047d6021", ResourceVersion:"4770106", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715766400, loc:(*time.Location)(0x7e52ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"675721707"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-jx7pp", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc000bca0c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jx7pp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jx7pp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jx7pp", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00218c078), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), 
ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002020660), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00218c1c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00218c200)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00218c208), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00218c20c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766400, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766400, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766400, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766400, loc:(*time.Location)(0x7e52ca0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.2", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.2"}}, StartTime:(*v1.Time)(0xc002428060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0026fe070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0026fe0e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://a454112b79675418bd43c9d56acc23a2c73dd4cff57c01c05201c35230aa250e", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024280a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002428080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00218c7af)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:01:00.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-123" for this suite.
• [SLOW TEST:60.124 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":280,"completed":44,"skipped":742,"failed":0}
SSSSSSSSSSSSSS
------------------------------
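For readers reproducing this case outside the suite: the pod dumped above runs init1 with /bin/false under RestartPolicy "Always", so the kubelet restarts init1 with backoff indefinitely (RestartCount:3 by the time of the dump), init2 stays Waiting, and the app container run1 never starts. The following is a minimal stand-alone sketch of the same spec, not the e2e framework's own helper; it assumes client-go v0.18+ (API calls take a context), and the pod and namespace names are made up for illustration.

// demo_init_fail.go: pod whose first init container always fails.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-init-fail"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				// init1 exits non-zero every time, so init2 and run1 never run.
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	// The pod stays Pending with condition reason "ContainersNotInitialized",
	// matching the status dump above.
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------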
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:01:00.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-64338403-d5a8-4180-922f-8b942271d3c0
STEP: Creating a pod to test consume configMaps
Jan 28 00:01:00.741: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-acccd32f-3f99-4ca9-9f3d-decda0bdd8b2" in namespace "projected-1807" to be "success or failure"
Jan 28 00:01:00.744: INFO: Pod "pod-projected-configmaps-acccd32f-3f99-4ca9-9f3d-decda0bdd8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.680898ms
Jan 28 00:01:02.752: INFO: Pod "pod-projected-configmaps-acccd32f-3f99-4ca9-9f3d-decda0bdd8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011751051s
Jan 28 00:01:04.760: INFO: Pod "pod-projected-configmaps-acccd32f-3f99-4ca9-9f3d-decda0bdd8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019106084s
Jan 28 00:01:06.764: INFO: Pod "pod-projected-configmaps-acccd32f-3f99-4ca9-9f3d-decda0bdd8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023685042s
Jan 28 00:01:08.770: INFO: Pod "pod-projected-configmaps-acccd32f-3f99-4ca9-9f3d-decda0bdd8b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.029500461s
STEP: Saw pod success
Jan 28 00:01:08.770: INFO: Pod "pod-projected-configmaps-acccd32f-3f99-4ca9-9f3d-decda0bdd8b2" satisfied condition "success or failure"
Jan 28 00:01:08.775: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-acccd32f-3f99-4ca9-9f3d-decda0bdd8b2 container projected-configmap-volume-test:
STEP: delete the pod
Jan 28 00:01:08.914: INFO: Waiting for pod pod-projected-configmaps-acccd32f-3f99-4ca9-9f3d-decda0bdd8b2 to disappear
Jan 28 00:01:08.920: INFO: Pod pod-projected-configmaps-acccd32f-3f99-4ca9-9f3d-decda0bdd8b2 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:01:08.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1807" for this suite.
• [SLOW TEST:8.261 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":45,"skipped":756,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:01:08.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-2248
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 28 00:01:09.160: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 28 00:01:09.240: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 00:01:11.882: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 00:01:13.257: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 00:01:16.078: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 00:01:17.365: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 00:01:19.258: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 00:01:21.248: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 00:01:23.249: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 00:01:25.247: INFO: The
status of Pod netserver-0 is Running (Ready = false) Jan 28 00:01:27.246: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 28 00:01:29.246: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 28 00:01:31.246: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 28 00:01:33.246: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 28 00:01:35.246: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 28 00:01:35.253: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jan 28 00:01:43.872: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-2248 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 00:01:43.872: INFO: >>> kubeConfig: /root/.kube/config I0128 00:01:43.943166 9 log.go:172] (0xc002226790) (0xc0017c9860) Create stream I0128 00:01:43.943383 9 log.go:172] (0xc002226790) (0xc0017c9860) Stream added, broadcasting: 1 I0128 00:01:43.950411 9 log.go:172] (0xc002226790) Reply frame received for 1 I0128 00:01:43.950511 9 log.go:172] (0xc002226790) (0xc001ac0960) Create stream I0128 00:01:43.950535 9 log.go:172] (0xc002226790) (0xc001ac0960) Stream added, broadcasting: 3 I0128 00:01:43.952563 9 log.go:172] (0xc002226790) Reply frame received for 3 I0128 00:01:43.952636 9 log.go:172] (0xc002226790) (0xc001b40000) Create stream I0128 00:01:43.952652 9 log.go:172] (0xc002226790) (0xc001b40000) Stream added, broadcasting: 5 I0128 00:01:43.953968 9 log.go:172] (0xc002226790) Reply frame received for 5 I0128 00:01:44.055911 9 log.go:172] (0xc002226790) Data frame received for 3 I0128 00:01:44.056111 9 log.go:172] (0xc001ac0960) (3) Data frame handling I0128 00:01:44.056159 9 log.go:172] (0xc001ac0960) (3) Data frame sent I0128 00:01:44.141252 9 log.go:172] (0xc002226790) Data frame received for 1 I0128 00:01:44.141356 9 log.go:172] (0xc002226790) (0xc001ac0960) Stream removed, broadcasting: 3 I0128 00:01:44.141445 9 log.go:172] (0xc0017c9860) (1) Data frame handling I0128 00:01:44.141501 9 log.go:172] (0xc002226790) (0xc001b40000) Stream removed, broadcasting: 5 I0128 00:01:44.141552 9 log.go:172] (0xc0017c9860) (1) Data frame sent I0128 00:01:44.141574 9 log.go:172] (0xc002226790) (0xc0017c9860) Stream removed, broadcasting: 1 I0128 00:01:44.141618 9 log.go:172] (0xc002226790) Go away received I0128 00:01:44.142819 9 log.go:172] (0xc002226790) (0xc0017c9860) Stream removed, broadcasting: 1 I0128 00:01:44.142854 9 log.go:172] (0xc002226790) (0xc001ac0960) Stream removed, broadcasting: 3 I0128 00:01:44.142866 9 log.go:172] (0xc002226790) (0xc001b40000) Stream removed, broadcasting: 5 Jan 28 00:01:44.143: INFO: Waiting for responses: map[] Jan 28 00:01:44.148: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-2248 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 28 00:01:44.148: INFO: >>> kubeConfig: /root/.kube/config I0128 00:01:44.188860 9 log.go:172] (0xc002226fd0) (0xc0017c9e00) Create stream I0128 00:01:44.188971 9 log.go:172] (0xc002226fd0) (0xc0017c9e00) Stream added, broadcasting: 1 I0128 00:01:44.195021 9 log.go:172] (0xc002226fd0) Reply frame received for 1 I0128 00:01:44.195080 9 
log.go:172] (0xc002226fd0) (0xc0002db540) Create stream I0128 00:01:44.195093 9 log.go:172] (0xc002226fd0) (0xc0002db540) Stream added, broadcasting: 3 I0128 00:01:44.196803 9 log.go:172] (0xc002226fd0) Reply frame received for 3 I0128 00:01:44.196832 9 log.go:172] (0xc002226fd0) (0xc001ac0aa0) Create stream I0128 00:01:44.196840 9 log.go:172] (0xc002226fd0) (0xc001ac0aa0) Stream added, broadcasting: 5 I0128 00:01:44.198349 9 log.go:172] (0xc002226fd0) Reply frame received for 5 I0128 00:01:44.268209 9 log.go:172] (0xc002226fd0) Data frame received for 3 I0128 00:01:44.268449 9 log.go:172] (0xc0002db540) (3) Data frame handling I0128 00:01:44.268483 9 log.go:172] (0xc0002db540) (3) Data frame sent I0128 00:01:44.354294 9 log.go:172] (0xc002226fd0) Data frame received for 1 I0128 00:01:44.354382 9 log.go:172] (0xc002226fd0) (0xc001ac0aa0) Stream removed, broadcasting: 5 I0128 00:01:44.354468 9 log.go:172] (0xc0017c9e00) (1) Data frame handling I0128 00:01:44.354494 9 log.go:172] (0xc0017c9e00) (1) Data frame sent I0128 00:01:44.354525 9 log.go:172] (0xc002226fd0) (0xc0002db540) Stream removed, broadcasting: 3 I0128 00:01:44.354613 9 log.go:172] (0xc002226fd0) (0xc0017c9e00) Stream removed, broadcasting: 1 I0128 00:01:44.354670 9 log.go:172] (0xc002226fd0) Go away received I0128 00:01:44.355107 9 log.go:172] (0xc002226fd0) (0xc0017c9e00) Stream removed, broadcasting: 1 I0128 00:01:44.355131 9 log.go:172] (0xc002226fd0) (0xc0002db540) Stream removed, broadcasting: 3 I0128 00:01:44.355136 9 log.go:172] (0xc002226fd0) (0xc001ac0aa0) Stream removed, broadcasting: 5 Jan 28 00:01:44.355: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 28 00:01:44.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2248" for this suite. 
• [SLOW TEST:35.435 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for intra-pod communication: http [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":280,"completed":46,"skipped":786,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
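The two ExecWithOptions blocks above run curl inside test-container-pod against the agnhost webserver's /dial endpoint, which in turn dials each netserver pod and reports the hostnames that answered; an empty "Waiting for responses: map[]" means every expected response arrived. A rough stand-alone equivalent of one dial round is sketched below; it assumes the same /dial contract seen in the log's curl commands, must run where pod IPs are routable (i.e. from inside the cluster), and hard-codes the pod IPs that happen to appear in this run.

// dial_check.go: ask the webserver on one pod to dial another pod's hostname endpoint.
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func dial(via, target string) (string, error) {
	q := url.Values{
		"request":  {"hostname"},
		"protocol": {"http"},
		"host":     {target},
		"port":     {"8080"},
		"tries":    {"1"},
	}
	// Same URL shape as the curl commands in the log above.
	resp, err := http.Get(fmt.Sprintf("http://%s:8080/dial?%s", via, q.Encode()))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err // e.g. {"responses":["netserver-0"]}
}

func main() {
	for _, target := range []string{"10.44.0.1", "10.32.0.4"} {
		out, err := dial("10.44.0.2", target)
		fmt.Println(out, err)
	}
}
------------------------------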
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:01:44.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating secret secrets-7264/secret-test-7acbb361-589e-4264-9b62-5df1db1e9fd1
STEP: Creating a pod to test consume secrets
Jan 28 00:01:44.501: INFO: Waiting up to 5m0s for pod "pod-configmaps-a45859f2-5556-458b-9afd-f507721d6854" in namespace "secrets-7264" to be "success or failure"
Jan 28 00:01:44.553: INFO: Pod "pod-configmaps-a45859f2-5556-458b-9afd-f507721d6854": Phase="Pending", Reason="", readiness=false. Elapsed: 52.012727ms
Jan 28 00:01:46.563: INFO: Pod "pod-configmaps-a45859f2-5556-458b-9afd-f507721d6854": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061438747s
Jan 28 00:01:48.734: INFO: Pod "pod-configmaps-a45859f2-5556-458b-9afd-f507721d6854": Phase="Pending", Reason="", readiness=false. Elapsed: 4.233082135s
Jan 28 00:01:50.871: INFO: Pod "pod-configmaps-a45859f2-5556-458b-9afd-f507721d6854": Phase="Pending", Reason="", readiness=false. Elapsed: 6.369318238s
Jan 28 00:01:52.879: INFO: Pod "pod-configmaps-a45859f2-5556-458b-9afd-f507721d6854": Phase="Pending", Reason="", readiness=false. Elapsed: 8.378084252s
Jan 28 00:01:54.890: INFO: Pod "pod-configmaps-a45859f2-5556-458b-9afd-f507721d6854": Phase="Pending", Reason="", readiness=false. Elapsed: 10.389043724s
Jan 28 00:01:56.897: INFO: Pod "pod-configmaps-a45859f2-5556-458b-9afd-f507721d6854": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.396024083s
STEP: Saw pod success
Jan 28 00:01:56.897: INFO: Pod "pod-configmaps-a45859f2-5556-458b-9afd-f507721d6854" satisfied condition "success or failure"
Jan 28 00:01:56.902: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a45859f2-5556-458b-9afd-f507721d6854 container env-test:
STEP: delete the pod
Jan 28 00:01:56.950: INFO: Waiting for pod pod-configmaps-a45859f2-5556-458b-9afd-f507721d6854 to disappear
Jan 28 00:01:56.958: INFO: Pod pod-configmaps-a45859f2-5556-458b-9afd-f507721d6854 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:01:56.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7264" for this suite.
• [SLOW TEST:12.602 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":47,"skipped":814,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:01:56.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 28 00:02:05.128: INFO: &Pod{ObjectMeta:{send-events-44be09d7-5312-451c-8b0e-e5c3a48b5a95 events-2593 /api/v1/namespaces/events-2593/pods/send-events-44be09d7-5312-451c-8b0e-e5c3a48b5a95 8c9694e4-1432-4c14-804a-a331f5a1a373 4770380 0 2020-01-28 00:01:57 +0000 UTC map[name:foo time:87717339] map[] [] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r4rl7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r4rl7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r4rl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:01:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:02:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:02:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:01:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-28 00:01:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 00:02:03 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://7f8e761841a385dd31f551a83a5a079a69c0a84be3fc7ba1cc33c2cdd6e3d4bf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Jan 28 00:02:07.140: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jan 28 00:02:09.148: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 28 00:02:09.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2593" for this suite. • [SLOW TEST:12.224 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":280,"completed":48,"skipped":824,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 28 00:02:09.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-n6l65 in namespace proxy-5710 I0128 00:02:09.367874 9 runners.go:189] Created replication controller with name: proxy-service-n6l65, namespace: proxy-5710, replica count: 1 I0128 00:02:10.418903 9 runners.go:189] proxy-service-n6l65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 00:02:11.419498 9 runners.go:189] proxy-service-n6l65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 00:02:12.419897 9 runners.go:189] proxy-service-n6l65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 00:02:13.420335 9 runners.go:189] proxy-service-n6l65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 00:02:14.420879 9 runners.go:189] proxy-service-n6l65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady I0128 00:02:15.421334 9 runners.go:189] proxy-service-n6l65 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0128 00:02:16.421665 9 runners.go:189] proxy-service-n6l65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0128 00:02:17.422047 9 runners.go:189] proxy-service-n6l65 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0128 00:02:18.422417 9 runners.go:189] proxy-service-n6l65 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 28 00:02:18.426: INFO: setup took 9.112837137s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jan 28 00:02:18.455: INFO: (0) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:1080/proxy/: ... (200; 27.892828ms) Jan 28 00:02:18.455: INFO: (0) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 27.646539ms) Jan 28 00:02:18.456: INFO: (0) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 28.737784ms) Jan 28 00:02:18.458: INFO: (0) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:1080/proxy/: test<... (200; 31.101787ms) Jan 28 00:02:18.459: INFO: (0) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc/proxy/: test (200; 32.122835ms) Jan 28 00:02:18.461: INFO: (0) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 34.050117ms) Jan 28 00:02:18.467: INFO: (0) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:462/proxy/: tls qux (200; 40.824822ms) Jan 28 00:02:18.471: INFO: (0) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname2/proxy/: bar (200; 44.950214ms) Jan 28 00:02:18.472: INFO: (0) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname2/proxy/: bar (200; 45.18661ms) Jan 28 00:02:18.474: INFO: (0) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 47.991606ms) Jan 28 00:02:18.478: INFO: (0) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname1/proxy/: foo (200; 51.222533ms) Jan 28 00:02:18.478: INFO: (0) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname2/proxy/: tls qux (200; 50.753608ms) Jan 28 00:02:18.478: INFO: (0) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname1/proxy/: foo (200; 50.939863ms) Jan 28 00:02:18.486: INFO: (0) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname1/proxy/: tls baz (200; 58.764192ms) Jan 28 00:02:18.486: INFO: (0) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:460/proxy/: tls baz (200; 58.822993ms) Jan 28 00:02:18.488: INFO: (0) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: test (200; 12.906807ms) Jan 28 00:02:18.505: INFO: (1) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:1080/proxy/: ... (200; 13.397539ms) Jan 28 00:02:18.505: INFO: (1) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:462/proxy/: tls qux (200; 13.863766ms) Jan 28 00:02:18.505: INFO: (1) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: test<... 
(200; 15.701287ms) Jan 28 00:02:18.507: INFO: (1) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 18.098944ms) Jan 28 00:02:18.507: INFO: (1) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:460/proxy/: tls baz (200; 18.196075ms) Jan 28 00:02:18.549: INFO: (1) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname1/proxy/: tls baz (200; 59.283223ms) Jan 28 00:02:18.549: INFO: (1) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname2/proxy/: bar (200; 57.547039ms) Jan 28 00:02:18.549: INFO: (1) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname1/proxy/: foo (200; 59.540153ms) Jan 28 00:02:18.549: INFO: (1) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname2/proxy/: bar (200; 59.270965ms) Jan 28 00:02:18.549: INFO: (1) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname1/proxy/: foo (200; 59.399901ms) Jan 28 00:02:18.550: INFO: (1) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname2/proxy/: tls qux (200; 60.391229ms) Jan 28 00:02:18.565: INFO: (2) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 13.729788ms) Jan 28 00:02:18.565: INFO: (2) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:460/proxy/: tls baz (200; 13.743381ms) Jan 28 00:02:18.565: INFO: (2) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: ... (200; 13.606896ms) Jan 28 00:02:18.565: INFO: (2) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 14.242974ms) Jan 28 00:02:18.565: INFO: (2) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:462/proxy/: tls qux (200; 12.920499ms) Jan 28 00:02:18.566: INFO: (2) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 14.202237ms) Jan 28 00:02:18.566: INFO: (2) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname1/proxy/: foo (200; 14.74771ms) Jan 28 00:02:18.567: INFO: (2) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc/proxy/: test (200; 14.744567ms) Jan 28 00:02:18.567: INFO: (2) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname2/proxy/: tls qux (200; 14.882224ms) Jan 28 00:02:18.570: INFO: (2) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:1080/proxy/: test<... 
(200; 18.438061ms) Jan 28 00:02:18.570: INFO: (2) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname1/proxy/: tls baz (200; 17.948466ms) Jan 28 00:02:18.570: INFO: (2) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname1/proxy/: foo (200; 18.112867ms) Jan 28 00:02:18.570: INFO: (2) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname2/proxy/: bar (200; 18.78545ms) Jan 28 00:02:18.570: INFO: (2) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname2/proxy/: bar (200; 18.291671ms) Jan 28 00:02:18.571: INFO: (2) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 19.224338ms) Jan 28 00:02:18.584: INFO: (3) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:462/proxy/: tls qux (200; 12.816584ms) Jan 28 00:02:18.586: INFO: (3) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 14.321245ms) Jan 28 00:02:18.586: INFO: (3) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 14.355962ms) Jan 28 00:02:18.586: INFO: (3) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 14.336455ms) Jan 28 00:02:18.586: INFO: (3) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc/proxy/: test (200; 14.635423ms) Jan 28 00:02:18.586: INFO: (3) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:1080/proxy/: ... (200; 15.143863ms) Jan 28 00:02:18.587: INFO: (3) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:460/proxy/: tls baz (200; 15.758871ms) Jan 28 00:02:18.587: INFO: (3) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: test<... (200; 17.999228ms) Jan 28 00:02:18.590: INFO: (3) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname2/proxy/: tls qux (200; 18.045791ms) Jan 28 00:02:18.591: INFO: (3) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname1/proxy/: tls baz (200; 19.21999ms) Jan 28 00:02:18.591: INFO: (3) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname1/proxy/: foo (200; 19.785428ms) Jan 28 00:02:18.592: INFO: (3) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname2/proxy/: bar (200; 20.264628ms) Jan 28 00:02:18.593: INFO: (3) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname2/proxy/: bar (200; 21.901371ms) Jan 28 00:02:18.599: INFO: (4) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:460/proxy/: tls baz (200; 5.390024ms) Jan 28 00:02:18.604: INFO: (4) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 10.83634ms) Jan 28 00:02:18.608: INFO: (4) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:1080/proxy/: ... (200; 15.058421ms) Jan 28 00:02:18.609: INFO: (4) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 14.958988ms) Jan 28 00:02:18.609: INFO: (4) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname2/proxy/: bar (200; 14.981209ms) Jan 28 00:02:18.611: INFO: (4) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:1080/proxy/: test<... 
(200; 17.162116ms) Jan 28 00:02:18.611: INFO: (4) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 17.65105ms) Jan 28 00:02:18.611: INFO: (4) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname1/proxy/: foo (200; 17.652184ms) Jan 28 00:02:18.612: INFO: (4) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 18.261237ms) Jan 28 00:02:18.612: INFO: (4) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname2/proxy/: bar (200; 18.551544ms) Jan 28 00:02:18.613: INFO: (4) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: test (200; 22.900382ms) Jan 28 00:02:18.622: INFO: (5) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:1080/proxy/: test<... (200; 4.910163ms) Jan 28 00:02:18.622: INFO: (5) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 5.368045ms) Jan 28 00:02:18.623: INFO: (5) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname2/proxy/: tls qux (200; 6.588246ms) Jan 28 00:02:18.624: INFO: (5) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:1080/proxy/: ... (200; 6.888101ms) Jan 28 00:02:18.631: INFO: (5) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc/proxy/: test (200; 14.449225ms) Jan 28 00:02:18.632: INFO: (5) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 14.581991ms) Jan 28 00:02:18.632: INFO: (5) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 14.684733ms) Jan 28 00:02:18.632: INFO: (5) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname2/proxy/: bar (200; 15.074257ms) Jan 28 00:02:18.632: INFO: (5) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:460/proxy/: tls baz (200; 14.761867ms) Jan 28 00:02:18.632: INFO: (5) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: test<... (200; 5.181147ms) Jan 28 00:02:18.641: INFO: (6) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:462/proxy/: tls qux (200; 5.820211ms) Jan 28 00:02:18.647: INFO: (6) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: ... 
(200; 16.036848ms) Jan 28 00:02:18.652: INFO: (6) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 16.799974ms) Jan 28 00:02:18.652: INFO: (6) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc/proxy/: test (200; 17.092353ms) Jan 28 00:02:18.652: INFO: (6) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname1/proxy/: foo (200; 17.136653ms) Jan 28 00:02:18.655: INFO: (6) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname1/proxy/: foo (200; 19.307291ms) Jan 28 00:02:18.655: INFO: (6) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 19.486515ms) Jan 28 00:02:18.667: INFO: (7) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:462/proxy/: tls qux (200; 11.941948ms) Jan 28 00:02:18.667: INFO: (7) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 12.060316ms) Jan 28 00:02:18.667: INFO: (7) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 11.834387ms) Jan 28 00:02:18.668: INFO: (7) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc/proxy/: test (200; 12.562495ms) Jan 28 00:02:18.668: INFO: (7) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:460/proxy/: tls baz (200; 12.639816ms) Jan 28 00:02:18.668: INFO: (7) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:1080/proxy/: test<... (200; 13.331972ms) Jan 28 00:02:18.668: INFO: (7) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 13.53181ms) Jan 28 00:02:18.670: INFO: (7) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 15.392171ms) Jan 28 00:02:18.672: INFO: (7) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:1080/proxy/: ... (200; 16.938074ms) Jan 28 00:02:18.672: INFO: (7) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname1/proxy/: foo (200; 17.409808ms) Jan 28 00:02:18.672: INFO: (7) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname2/proxy/: bar (200; 17.548732ms) Jan 28 00:02:18.673: INFO: (7) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname1/proxy/: tls baz (200; 17.768894ms) Jan 28 00:02:18.673: INFO: (7) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname2/proxy/: bar (200; 17.615489ms) Jan 28 00:02:18.673: INFO: (7) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname1/proxy/: foo (200; 17.670216ms) Jan 28 00:02:18.673: INFO: (7) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname2/proxy/: tls qux (200; 18.159425ms) Jan 28 00:02:18.674: INFO: (7) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: ... (200; 7.279917ms) Jan 28 00:02:18.682: INFO: (8) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: test (200; 13.606837ms) Jan 28 00:02:18.688: INFO: (8) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:462/proxy/: tls qux (200; 13.751282ms) Jan 28 00:02:18.688: INFO: (8) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:1080/proxy/: test<... 
(200; 13.737692ms) Jan 28 00:02:18.688: INFO: (8) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname1/proxy/: foo (200; 13.963446ms) Jan 28 00:02:18.689: INFO: (8) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 13.725402ms) Jan 28 00:02:18.689: INFO: (8) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 13.807258ms) Jan 28 00:02:18.690: INFO: (8) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname1/proxy/: foo (200; 15.114288ms) Jan 28 00:02:18.690: INFO: (8) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname2/proxy/: bar (200; 14.806803ms) Jan 28 00:02:18.690: INFO: (8) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname2/proxy/: tls qux (200; 15.023614ms) Jan 28 00:02:18.690: INFO: (8) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname1/proxy/: tls baz (200; 15.810916ms) Jan 28 00:02:18.697: INFO: (9) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc/proxy/: test (200; 6.335045ms) Jan 28 00:02:18.699: INFO: (9) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: test<... (200; 8.629578ms) Jan 28 00:02:18.699: INFO: (9) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 8.668394ms) Jan 28 00:02:18.699: INFO: (9) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 8.536128ms) Jan 28 00:02:18.699: INFO: (9) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:1080/proxy/: ... (200; 8.657426ms) Jan 28 00:02:18.699: INFO: (9) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname1/proxy/: foo (200; 9.012881ms) Jan 28 00:02:18.701: INFO: (9) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname1/proxy/: tls baz (200; 10.081271ms) Jan 28 00:02:18.703: INFO: (9) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname2/proxy/: bar (200; 12.449364ms) Jan 28 00:02:18.704: INFO: (9) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname1/proxy/: foo (200; 13.185993ms) Jan 28 00:02:18.704: INFO: (9) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname2/proxy/: tls qux (200; 13.873985ms) Jan 28 00:02:18.705: INFO: (9) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname2/proxy/: bar (200; 14.71874ms) Jan 28 00:02:18.712: INFO: (10) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:1080/proxy/: test<... (200; 6.744024ms) Jan 28 00:02:18.712: INFO: (10) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:462/proxy/: tls qux (200; 6.88396ms) Jan 28 00:02:18.713: INFO: (10) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:1080/proxy/: ... 
(200; 7.628456ms) Jan 28 00:02:18.713: INFO: (10) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 7.832112ms) Jan 28 00:02:18.713: INFO: (10) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 7.992962ms) Jan 28 00:02:18.714: INFO: (10) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 8.240633ms) Jan 28 00:02:18.714: INFO: (10) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:460/proxy/: tls baz (200; 8.388945ms) Jan 28 00:02:18.714: INFO: (10) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc/proxy/: test (200; 8.569776ms) Jan 28 00:02:18.714: INFO: (10) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 9.296476ms) Jan 28 00:02:18.715: INFO: (10) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: ... (200; 7.223945ms) Jan 28 00:02:18.725: INFO: (11) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:460/proxy/: tls baz (200; 8.253071ms) Jan 28 00:02:18.726: INFO: (11) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 8.452892ms) Jan 28 00:02:18.726: INFO: (11) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 9.246986ms) Jan 28 00:02:18.727: INFO: (11) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc/proxy/: test (200; 10.258592ms) Jan 28 00:02:18.727: INFO: (11) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname2/proxy/: bar (200; 10.214828ms) Jan 28 00:02:18.728: INFO: (11) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname1/proxy/: foo (200; 10.604663ms) Jan 28 00:02:18.728: INFO: (11) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname1/proxy/: tls baz (200; 10.595598ms) Jan 28 00:02:18.728: INFO: (11) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: test<... (200; 10.712351ms) Jan 28 00:02:18.728: INFO: (11) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 10.798543ms) Jan 28 00:02:18.728: INFO: (11) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname2/proxy/: bar (200; 10.942558ms) Jan 28 00:02:18.728: INFO: (11) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname2/proxy/: tls qux (200; 11.065912ms) Jan 28 00:02:18.728: INFO: (11) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:462/proxy/: tls qux (200; 11.2254ms) Jan 28 00:02:18.729: INFO: (11) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 11.666287ms) Jan 28 00:02:18.729: INFO: (11) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname1/proxy/: foo (200; 11.659606ms) Jan 28 00:02:18.739: INFO: (12) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:460/proxy/: tls baz (200; 10.382025ms) Jan 28 00:02:18.740: INFO: (12) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname2/proxy/: tls qux (200; 10.780395ms) Jan 28 00:02:18.740: INFO: (12) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname1/proxy/: tls baz (200; 11.043495ms) Jan 28 00:02:18.743: INFO: (12) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname2/proxy/: bar (200; 14.237095ms) Jan 28 00:02:18.743: INFO: (12) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:1080/proxy/: test<... 
(200; 14.160374ms) Jan 28 00:02:18.743: INFO: (12) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname1/proxy/: foo (200; 14.263444ms) Jan 28 00:02:18.743: INFO: (12) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:462/proxy/: tls qux (200; 14.245962ms) Jan 28 00:02:18.743: INFO: (12) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname2/proxy/: bar (200; 14.298845ms) Jan 28 00:02:18.744: INFO: (12) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc/proxy/: test (200; 14.924083ms) Jan 28 00:02:18.744: INFO: (12) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname1/proxy/: foo (200; 15.14304ms) Jan 28 00:02:18.744: INFO: (12) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 15.181154ms) Jan 28 00:02:18.744: INFO: (12) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:1080/proxy/: ... (200; 15.306316ms) Jan 28 00:02:18.744: INFO: (12) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 15.355183ms) Jan 28 00:02:18.744: INFO: (12) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 15.329583ms) Jan 28 00:02:18.744: INFO: (12) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: ... (200; 6.704363ms) Jan 28 00:02:18.752: INFO: (13) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:460/proxy/: tls baz (200; 7.133693ms) Jan 28 00:02:18.752: INFO: (13) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 7.022814ms) Jan 28 00:02:18.752: INFO: (13) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: test<... (200; 7.151793ms) Jan 28 00:02:18.752: INFO: (13) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 7.524065ms) Jan 28 00:02:18.753: INFO: (13) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:462/proxy/: tls qux (200; 8.496665ms) Jan 28 00:02:18.755: INFO: (13) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc/proxy/: test (200; 10.351675ms) Jan 28 00:02:18.757: INFO: (13) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname2/proxy/: bar (200; 12.288203ms) Jan 28 00:02:18.757: INFO: (13) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname1/proxy/: foo (200; 12.303996ms) Jan 28 00:02:18.758: INFO: (13) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname1/proxy/: tls baz (200; 13.022347ms) Jan 28 00:02:18.758: INFO: (13) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname1/proxy/: foo (200; 13.034753ms) Jan 28 00:02:18.758: INFO: (13) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname2/proxy/: bar (200; 13.242153ms) Jan 28 00:02:18.760: INFO: (13) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname2/proxy/: tls qux (200; 15.619302ms) Jan 28 00:02:18.769: INFO: (14) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:462/proxy/: tls qux (200; 8.54494ms) Jan 28 00:02:18.769: INFO: (14) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:1080/proxy/: test<... (200; 8.302843ms) Jan 28 00:02:18.769: INFO: (14) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:1080/proxy/: ... 
(200; 8.54885ms) Jan 28 00:02:18.769: INFO: (14) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 8.665961ms) Jan 28 00:02:18.769: INFO: (14) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 8.190756ms) Jan 28 00:02:18.770: INFO: (14) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: test (200; 9.58498ms) Jan 28 00:02:18.770: INFO: (14) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 9.427528ms) Jan 28 00:02:18.771: INFO: (14) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 9.922426ms) Jan 28 00:02:18.771: INFO: (14) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:460/proxy/: tls baz (200; 10.463309ms) Jan 28 00:02:18.773: INFO: (14) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname1/proxy/: foo (200; 12.628968ms) Jan 28 00:02:18.773: INFO: (14) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname1/proxy/: tls baz (200; 12.433348ms) Jan 28 00:02:18.773: INFO: (14) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname2/proxy/: bar (200; 12.53445ms) Jan 28 00:02:18.774: INFO: (14) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname2/proxy/: tls qux (200; 12.872305ms) Jan 28 00:02:18.774: INFO: (14) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname1/proxy/: foo (200; 13.376709ms) Jan 28 00:02:18.775: INFO: (14) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname2/proxy/: bar (200; 14.46739ms) Jan 28 00:02:18.784: INFO: (15) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 8.39776ms) Jan 28 00:02:18.788: INFO: (15) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: test<... (200; 15.733852ms) Jan 28 00:02:18.791: INFO: (15) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:462/proxy/: tls qux (200; 15.950411ms) Jan 28 00:02:18.791: INFO: (15) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 15.876931ms) Jan 28 00:02:18.792: INFO: (15) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname1/proxy/: foo (200; 15.847222ms) Jan 28 00:02:18.792: INFO: (15) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 15.772123ms) Jan 28 00:02:18.792: INFO: (15) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname2/proxy/: tls qux (200; 16.01736ms) Jan 28 00:02:18.792: INFO: (15) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc/proxy/: test (200; 16.111791ms) Jan 28 00:02:18.792: INFO: (15) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 15.946249ms) Jan 28 00:02:18.792: INFO: (15) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname1/proxy/: foo (200; 16.031559ms) Jan 28 00:02:18.792: INFO: (15) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:1080/proxy/: ... 
(200; 15.908688ms)
Jan 28 00:02:18.796: INFO: (16) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 4.037581ms)
Jan 28 00:02:18.799: INFO: (16) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 6.448796ms)
Jan 28 00:02:18.799: INFO: (16) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 6.641942ms)
Jan 28 00:02:18.800: INFO: (16) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 7.507117ms)
Jan 28 00:02:18.800: INFO: (16) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc/proxy/: test (200; 7.502837ms)
Jan 28 00:02:18.800: INFO: (16) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: test<... (200; 8.003401ms)
Jan 28 00:02:18.800: INFO: (16) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:460/proxy/: tls baz (200; 7.746589ms)
Jan 28 00:02:18.800: INFO: (16) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname1/proxy/: foo (200; 8.082251ms)
Jan 28 00:02:18.801: INFO: (16) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname1/proxy/: tls baz (200; 8.515383ms)
Jan 28 00:02:18.801: INFO: (16) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:462/proxy/: tls qux (200; 9.033472ms)
Jan 28 00:02:18.802: INFO: (16) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname1/proxy/: foo (200; 10.015467ms)
Jan 28 00:02:18.802: INFO: (16) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:1080/proxy/: ... (200; 10.321536ms)
Jan 28 00:02:18.802: INFO: (16) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname2/proxy/: bar (200; 10.43028ms)
Jan 28 00:02:18.802: INFO: (16) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname2/proxy/: bar (200; 10.319203ms)
Jan 28 00:02:18.803: INFO: (16) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname2/proxy/: tls qux (200; 11.50638ms)
Jan 28 00:02:18.817: INFO: (17) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 13.483892ms)
Jan 28 00:02:18.817: INFO: (17) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:1080/proxy/: test<... (200; 13.564514ms)
Jan 28 00:02:18.817: INFO: (17) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname2/proxy/: bar (200; 13.515932ms)
Jan 28 00:02:18.817: INFO: (17) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:462/proxy/: tls qux (200; 13.715169ms)
Jan 28 00:02:18.817: INFO: (17) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname2/proxy/: tls qux (200; 13.443065ms)
Jan 28 00:02:18.817: INFO: (17) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:460/proxy/: tls baz (200; 13.520554ms)
Jan 28 00:02:18.817: INFO: (17) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 13.688396ms)
Jan 28 00:02:18.818: INFO: (17) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: ... (200; 14.463986ms)
Jan 28 00:02:18.818: INFO: (17) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 14.922313ms)
Jan 28 00:02:18.818: INFO: (17) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc/proxy/: test (200; 14.902989ms)
Jan 28 00:02:18.823: INFO: (17) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname2/proxy/: bar (200; 18.988684ms)
Jan 28 00:02:18.823: INFO: (17) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname1/proxy/: foo (200; 19.132624ms)
Jan 28 00:02:18.823: INFO: (17) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 19.320758ms)
Jan 28 00:02:18.823: INFO: (17) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname1/proxy/: tls baz (200; 19.339356ms)
Jan 28 00:02:18.823: INFO: (17) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname1/proxy/: foo (200; 19.538447ms)
Jan 28 00:02:18.831: INFO: (18) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 7.221194ms)
Jan 28 00:02:18.831: INFO: (18) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 7.345776ms)
Jan 28 00:02:18.835: INFO: (18) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname2/proxy/: tls qux (200; 11.181928ms)
Jan 28 00:02:18.835: INFO: (18) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:1080/proxy/: ... (200; 11.388036ms)
Jan 28 00:02:18.835: INFO: (18) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc/proxy/: test (200; 11.612386ms)
Jan 28 00:02:18.835: INFO: (18) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname2/proxy/: bar (200; 11.513577ms)
Jan 28 00:02:18.836: INFO: (18) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:1080/proxy/: test<... (200; 12.377012ms)
Jan 28 00:02:18.836: INFO: (18) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:162/proxy/: bar (200; 12.85369ms)
Jan 28 00:02:18.836: INFO: (18) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:460/proxy/: tls baz (200; 12.704302ms)
Jan 28 00:02:18.836: INFO: (18) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:462/proxy/: tls qux (200; 13.19286ms)
Jan 28 00:02:18.836: INFO: (18) /api/v1/namespaces/proxy-5710/pods/http:proxy-service-n6l65-ml7lc:160/proxy/: foo (200; 12.966814ms)
Jan 28 00:02:18.838: INFO: (18) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: test<... (200; 5.818843ms)
Jan 28 00:02:18.844: INFO: (19) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:443/proxy/: ... (200; 51.035428ms)
Jan 28 00:02:18.890: INFO: (19) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname1/proxy/: foo (200; 50.986076ms)
Jan 28 00:02:18.890: INFO: (19) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname1/proxy/: tls baz (200; 51.172675ms)
Jan 28 00:02:18.891: INFO: (19) /api/v1/namespaces/proxy-5710/services/https:proxy-service-n6l65:tlsportname2/proxy/: tls qux (200; 52.30145ms)
Jan 28 00:02:18.891: INFO: (19) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname1/proxy/: foo (200; 52.298404ms)
Jan 28 00:02:18.891: INFO: (19) /api/v1/namespaces/proxy-5710/services/http:proxy-service-n6l65:portname2/proxy/: bar (200; 52.710132ms)
Jan 28 00:02:18.891: INFO: (19) /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname2/proxy/: bar (200; 52.502352ms)
Jan 28 00:02:18.891: INFO: (19) /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc/proxy/: test (200; 52.637945ms)
Jan 28 00:02:18.891: INFO: (19) /api/v1/namespaces/proxy-5710/pods/https:proxy-service-n6l65-ml7lc:460/proxy/: tls baz (200; 52.722466ms)
STEP: deleting ReplicationController proxy-service-n6l65 in namespace proxy-5710, will wait for the garbage collector to delete the pods
Jan 28 00:02:18.951: INFO: Deleting ReplicationController proxy-service-n6l65 took: 5.725399ms
Jan 28 00:02:19.252: INFO: Terminating ReplicationController proxy-service-n6l65 pods took: 300.426744ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:02:32.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5710" for this suite.

• [SLOW TEST:23.272 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":280,"completed":49,"skipped":851,"failed":0}
SSSSSSSSSSSSSSSSSSS
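The proxy endpoints exercised above can be driven directly with client-go's ProxyGet helpers, which build exactly these /proxy/ subresource URLs. A minimal sketch, reusing the pod, service, and port names from this run; it assumes a recent client-go where DoRaw takes a context (older releases took no argument):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /api/v1/namespaces/proxy-5710/pods/proxy-service-n6l65-ml7lc:160/proxy/
	podBody, err := cs.CoreV1().Pods("proxy-5710").
		ProxyGet("", "proxy-service-n6l65-ml7lc", "160", "/", nil).
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod proxy returned: %s\n", podBody) // this run saw "foo"

	// GET /api/v1/namespaces/proxy-5710/services/proxy-service-n6l65:portname1/proxy/
	svcBody, err := cs.CoreV1().Services("proxy-5710").
		ProxyGet("http", "proxy-service-n6l65", "portname1", "/", nil).
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("service proxy returned: %s\n", svcBody) // this run saw "foo"
}

The request travels client -> apiserver -> kubelet/pod, which is why the test asserts both service-routed and direct pod-routed variants over http and https ports.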
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":280,"completed":50,"skipped":870,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 28 00:02:32.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-map-a9ea16a8-ae9d-441e-a4bf-e877b83ad0e1 STEP: Creating a pod to test consume configMaps Jan 28 00:02:33.080: INFO: Waiting up to 5m0s for pod "pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0" in namespace "configmap-3070" to be "success or failure" Jan 28 00:02:33.099: INFO: Pod "pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 19.033416ms Jan 28 00:02:35.105: INFO: Pod "pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024946353s Jan 28 00:02:37.110: INFO: Pod "pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030551896s Jan 28 00:02:39.115: INFO: Pod "pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035705255s Jan 28 00:02:41.123: INFO: Pod "pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043217026s STEP: Saw pod success Jan 28 00:02:41.123: INFO: Pod "pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0" satisfied condition "success or failure" Jan 28 00:02:41.127: INFO: Trying to get logs from node jerma-node pod pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0 container configmap-volume-test: STEP: delete the pod Jan 28 00:02:41.262: INFO: Waiting for pod pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0 to disappear Jan 28 00:02:41.277: INFO: Pod pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 28 00:02:41.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3070" for this suite. 
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:02:32.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-a9ea16a8-ae9d-441e-a4bf-e877b83ad0e1
STEP: Creating a pod to test consume configMaps
Jan 28 00:02:33.080: INFO: Waiting up to 5m0s for pod "pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0" in namespace "configmap-3070" to be "success or failure"
Jan 28 00:02:33.099: INFO: Pod "pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 19.033416ms
Jan 28 00:02:35.105: INFO: Pod "pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024946353s
Jan 28 00:02:37.110: INFO: Pod "pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030551896s
Jan 28 00:02:39.115: INFO: Pod "pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035705255s
Jan 28 00:02:41.123: INFO: Pod "pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.043217026s
STEP: Saw pod success
Jan 28 00:02:41.123: INFO: Pod "pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0" satisfied condition "success or failure"
Jan 28 00:02:41.127: INFO: Trying to get logs from node jerma-node pod pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0 container configmap-volume-test: 
STEP: delete the pod
Jan 28 00:02:41.262: INFO: Waiting for pod pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0 to disappear
Jan 28 00:02:41.277: INFO: Pod pod-configmaps-4fad02e7-86df-488c-9c8a-fe8abfe9c4b0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:02:41.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3070" for this suite.

• [SLOW TEST:8.398 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":51,"skipped":885,"failed":0}
SSSSSSSSSSSS
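The "with mappings" variant mounts the ConfigMap as a volume and remaps a key onto a nested file path via items. A sketch of the relevant spec; the key, target path, and mount point below are illustrative (the test's actual values are not visible in this log):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns := "default"
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(context.TODO(), cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
						// Remap key "data-1" to a nested file path inside the mount.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

The pod exits 0 only if the mapped file exists with the expected content, which is the "success or failure" condition the framework polls for above.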
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:02:41.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 28 00:02:41.512: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64bbf624-6c75-4199-ad85-dcd338444ce6" in namespace "projected-7988" to be "success or failure"
Jan 28 00:02:41.681: INFO: Pod "downwardapi-volume-64bbf624-6c75-4199-ad85-dcd338444ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 169.1721ms
Jan 28 00:02:43.687: INFO: Pod "downwardapi-volume-64bbf624-6c75-4199-ad85-dcd338444ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175032137s
Jan 28 00:02:45.692: INFO: Pod "downwardapi-volume-64bbf624-6c75-4199-ad85-dcd338444ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180141374s
Jan 28 00:02:47.710: INFO: Pod "downwardapi-volume-64bbf624-6c75-4199-ad85-dcd338444ce6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.198135268s
Jan 28 00:02:49.718: INFO: Pod "downwardapi-volume-64bbf624-6c75-4199-ad85-dcd338444ce6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.205410177s
STEP: Saw pod success
Jan 28 00:02:49.718: INFO: Pod "downwardapi-volume-64bbf624-6c75-4199-ad85-dcd338444ce6" satisfied condition "success or failure"
Jan 28 00:02:49.722: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-64bbf624-6c75-4199-ad85-dcd338444ce6 container client-container: 
STEP: delete the pod
Jan 28 00:02:49.779: INFO: Waiting for pod downwardapi-volume-64bbf624-6c75-4199-ad85-dcd338444ce6 to disappear
Jan 28 00:02:49.793: INFO: Pod downwardapi-volume-64bbf624-6c75-4199-ad85-dcd338444ce6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:02:49.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7988" for this suite.

• [SLOW TEST:8.593 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":52,"skipped":897,"failed":0}
SSSSSSS
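The behavior under test: a downward API resourceFieldRef for limits.memory on a container that sets no memory limit resolves to the node's allocatable memory instead. A sketch of such a pod using a projected downwardAPI volume; file path and container name are assumptions for illustration:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox",
				// No resources.limits.memory is set, so the projected file
				// should contain the node allocatable memory value.
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}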
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:02:49.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-8230
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-8230
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8230
Jan 28 00:02:50.315: INFO: Found 0 stateful pods, waiting for 1
Jan 28 00:03:00.437: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 28 00:03:00.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8230 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 28 00:03:03.018: INFO: stderr: "I0128 00:03:02.745477 1162 log.go:172] (0xc00002f4a0) (0xc0007f4000) Create stream\nI0128 00:03:02.745613 1162 log.go:172] (0xc00002f4a0) (0xc0007f4000) Stream added, broadcasting: 1\nI0128 00:03:02.749873 1162 log.go:172] (0xc00002f4a0) Reply frame received for 1\nI0128 00:03:02.749930 1162 log.go:172] (0xc00002f4a0) (0xc0007f4280) Create stream\nI0128 00:03:02.749945 1162 log.go:172] (0xc00002f4a0) (0xc0007f4280) Stream added, broadcasting: 3\nI0128 00:03:02.751603 1162 log.go:172] (0xc00002f4a0) Reply frame received for 3\nI0128 00:03:02.751624 1162 log.go:172] (0xc00002f4a0) (0xc000671e00) Create stream\nI0128 00:03:02.751633 1162 log.go:172] (0xc00002f4a0) (0xc000671e00) Stream added, broadcasting: 5\nI0128 00:03:02.753923 1162 log.go:172] (0xc00002f4a0) Reply frame received for 5\nI0128 00:03:02.865015 1162 log.go:172] (0xc00002f4a0) Data frame received for 5\nI0128 00:03:02.865151 1162 log.go:172] (0xc000671e00) (5) Data frame handling\nI0128 00:03:02.865197 1162 log.go:172] (0xc000671e00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0128 00:03:02.894939 1162 log.go:172] (0xc00002f4a0) Data frame received for 3\nI0128 00:03:02.895058 1162 log.go:172] (0xc0007f4280) (3) Data frame handling\nI0128 00:03:02.895107 1162 log.go:172] (0xc0007f4280) (3) Data frame sent\nI0128 00:03:02.997310 1162 log.go:172] (0xc00002f4a0) Data frame received for 1\nI0128 00:03:02.997515 1162 log.go:172] (0xc00002f4a0) (0xc0007f4280) Stream removed, broadcasting: 3\nI0128 00:03:02.997761 1162 log.go:172] (0xc00002f4a0) (0xc000671e00) Stream removed, broadcasting: 5\nI0128 00:03:02.997881 1162 log.go:172] (0xc0007f4000) (1) Data frame handling\nI0128 00:03:02.997940 1162 log.go:172] (0xc0007f4000) (1) Data frame sent\nI0128 00:03:02.997959 1162 log.go:172] (0xc00002f4a0) (0xc0007f4000) Stream removed, broadcasting: 1\nI0128 00:03:02.998809 1162 log.go:172] (0xc00002f4a0) Go away received\nI0128 00:03:02.999697 1162 log.go:172] (0xc00002f4a0) (0xc0007f4000) Stream removed, broadcasting: 1\nI0128 00:03:02.999714 1162 log.go:172] (0xc00002f4a0) (0xc0007f4280) Stream removed, broadcasting: 3\nI0128 00:03:02.999718 1162 log.go:172] (0xc00002f4a0) (0xc000671e00) Stream removed, broadcasting: 5\n"
Jan 28 00:03:03.018: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 28 00:03:03.018: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 28 00:03:03.024: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 28 00:03:13.030: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 00:03:13.030: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 00:03:13.052: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999763s
Jan 28 00:03:14.059: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.985180591s
Jan 28 00:03:15.066: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.978006871s
Jan 28 00:03:16.074: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.971156063s
Jan 28 00:03:17.080: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.963173759s
Jan 28 00:03:18.086: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.957072645s
Jan 28 00:03:19.094: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.951288397s
Jan 28 00:03:20.101: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.943444699s
Jan 28 00:03:21.109: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.93563397s
Jan 28 00:03:22.124: INFO: Verifying statefulset ss doesn't scale past 1 for another 928.250539ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8230
Jan 28 00:03:23.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 00:03:23.479: INFO: stderr: "I0128 00:03:23.295284 1192 log.go:172] (0xc0001074a0) (0xc00069fea0) Create stream\nI0128 00:03:23.295826 1192 log.go:172] (0xc0001074a0) (0xc00069fea0) Stream added, broadcasting: 1\nI0128 00:03:23.300429 1192 log.go:172] (0xc0001074a0) Reply frame received for 1\nI0128 00:03:23.300574 1192 log.go:172] (0xc0001074a0) (0xc00059e780) Create stream\nI0128 00:03:23.300585 1192 log.go:172] (0xc0001074a0) (0xc00059e780) Stream added, broadcasting: 3\nI0128 00:03:23.302499 1192 log.go:172] (0xc0001074a0) Reply frame received for 3\nI0128 00:03:23.302525 1192 log.go:172] (0xc0001074a0) (0xc00037b400) Create stream\nI0128 00:03:23.302535 1192 log.go:172] (0xc0001074a0) (0xc00037b400) Stream added, broadcasting: 5\nI0128 00:03:23.303861 1192 log.go:172] (0xc0001074a0) Reply frame received for 5\nI0128 00:03:23.369198 1192 log.go:172] (0xc0001074a0) Data frame received for 3\nI0128 00:03:23.369305 1192 log.go:172] (0xc00059e780) (3) Data frame handling\nI0128 00:03:23.369321 1192 log.go:172] (0xc00059e780) (3) Data frame sent\nI0128 00:03:23.369359 1192 log.go:172] (0xc0001074a0) Data frame received for 5\nI0128 00:03:23.369367 1192 log.go:172] (0xc00037b400) (5) Data frame handling\nI0128 00:03:23.369378 1192 log.go:172] (0xc00037b400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0128 00:03:23.460910 1192 log.go:172] (0xc0001074a0) (0xc00059e780) Stream removed, broadcasting: 3\nI0128 00:03:23.461203 1192 log.go:172] (0xc0001074a0) Data frame received for 1\nI0128 00:03:23.461275 1192 log.go:172] (0xc0001074a0) (0xc00037b400) Stream removed, broadcasting: 5\nI0128 00:03:23.461317 1192 log.go:172] (0xc00069fea0) (1) Data frame handling\nI0128 00:03:23.461367 1192 log.go:172] (0xc00069fea0) (1) Data frame sent\nI0128 00:03:23.461388 1192 log.go:172] (0xc0001074a0) (0xc00069fea0) Stream removed, broadcasting: 1\nI0128 00:03:23.461414 1192 log.go:172] (0xc0001074a0) Go away received\nI0128 00:03:23.462721 1192 log.go:172] (0xc0001074a0) (0xc00069fea0) Stream removed, broadcasting: 1\nI0128 00:03:23.462743 1192 log.go:172] (0xc0001074a0) (0xc00059e780) Stream removed, broadcasting: 3\nI0128 00:03:23.462748 1192 log.go:172] (0xc0001074a0) (0xc00037b400) Stream removed, broadcasting: 5\n"
Jan 28 00:03:23.479: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 28 00:03:23.479: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 28 00:03:23.487: INFO: Found 1 stateful pods, waiting for 3
Jan 28 00:03:33.498: INFO: Found 2 stateful pods, waiting for 3
Jan 28 00:03:43.494: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 00:03:43.495: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 00:03:43.495: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 28 00:03:43.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8230 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 28 00:03:44.048: INFO: stderr: "I0128 00:03:43.815285 1213 log.go:172] (0xc0009c8dc0) (0xc0009840a0) Create stream\nI0128 00:03:43.815705 1213 log.go:172] (0xc0009c8dc0) (0xc0009840a0) Stream added, broadcasting: 1\nI0128 00:03:43.819896 1213 log.go:172] (0xc0009c8dc0) Reply frame received for 1\nI0128 00:03:43.820081 1213 log.go:172] (0xc0009c8dc0) (0xc000984140) Create stream\nI0128 00:03:43.820150 1213 log.go:172] (0xc0009c8dc0) (0xc000984140) Stream added, broadcasting: 3\nI0128 00:03:43.822919 1213 log.go:172] (0xc0009c8dc0) Reply frame received for 3\nI0128 00:03:43.822963 1213 log.go:172] (0xc0009c8dc0) (0xc000986000) Create stream\nI0128 00:03:43.822976 1213 log.go:172] (0xc0009c8dc0) (0xc000986000) Stream added, broadcasting: 5\nI0128 00:03:43.824435 1213 log.go:172] (0xc0009c8dc0) Reply frame received for 5\nI0128 00:03:43.927508 1213 log.go:172] (0xc0009c8dc0) Data frame received for 5\nI0128 00:03:43.927785 1213 log.go:172] (0xc000986000) (5) Data frame handling\nI0128 00:03:43.927880 1213 log.go:172] (0xc000986000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0128 00:03:43.929748 1213 log.go:172] (0xc0009c8dc0) Data frame received for 3\nI0128 00:03:43.929770 1213 log.go:172] (0xc000984140) (3) Data frame handling\nI0128 00:03:43.929790 1213 log.go:172] (0xc000984140) (3) Data frame sent\nI0128 00:03:44.029627 1213 log.go:172] (0xc0009c8dc0) (0xc000986000) Stream removed, broadcasting: 5\nI0128 00:03:44.029969 1213 log.go:172] (0xc0009c8dc0) Data frame received for 1\nI0128 00:03:44.030139 1213 log.go:172] (0xc0009c8dc0) (0xc000984140) Stream removed, broadcasting: 3\nI0128 00:03:44.030268 1213 log.go:172] (0xc0009840a0) (1) Data frame handling\nI0128 00:03:44.030300 1213 log.go:172] (0xc0009840a0) (1) Data frame sent\nI0128 00:03:44.030313 1213 log.go:172] (0xc0009c8dc0) (0xc0009840a0) Stream removed, broadcasting: 1\nI0128 00:03:44.030352 1213 log.go:172] (0xc0009c8dc0) Go away received\nI0128 00:03:44.031924 1213 log.go:172] (0xc0009c8dc0) (0xc0009840a0) Stream removed, broadcasting: 1\nI0128 00:03:44.031943 1213 log.go:172] (0xc0009c8dc0) (0xc000984140) Stream removed, broadcasting: 3\nI0128 00:03:44.031952 1213 log.go:172] (0xc0009c8dc0) (0xc000986000) Stream removed, broadcasting: 5\n"
Jan 28 00:03:44.049: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 28 00:03:44.049: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 28 00:03:44.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8230 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 28 00:03:44.743: INFO: stderr: "I0128 00:03:44.335417 1233 log.go:172] (0xc0009ce000) (0xc00074a000) Create stream\nI0128 00:03:44.335985 1233 log.go:172] (0xc0009ce000) (0xc00074a000) Stream added, broadcasting: 1\nI0128 00:03:44.350498 1233 log.go:172] (0xc0009ce000) Reply frame received for 1\nI0128 00:03:44.350764 1233 log.go:172] (0xc0009ce000) (0xc000815cc0) Create stream\nI0128 00:03:44.350797 1233 log.go:172] (0xc0009ce000) (0xc000815cc0) Stream added, broadcasting: 3\nI0128 00:03:44.352506 1233 log.go:172] (0xc0009ce000) Reply frame received for 3\nI0128 00:03:44.352538 1233 log.go:172] (0xc0009ce000) (0xc00070c320) Create stream\nI0128 00:03:44.352554 1233 log.go:172] (0xc0009ce000) (0xc00070c320) Stream added, broadcasting: 5\nI0128 00:03:44.353845 1233 log.go:172] (0xc0009ce000) Reply frame received for 5\nI0128 00:03:44.493518 1233 log.go:172] (0xc0009ce000) Data frame received for 5\nI0128 00:03:44.493739 1233 log.go:172] (0xc00070c320) (5) Data frame handling\nI0128 00:03:44.493809 1233 log.go:172] (0xc00070c320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0128 00:03:44.535362 1233 log.go:172] (0xc0009ce000) Data frame received for 3\nI0128 00:03:44.535510 1233 log.go:172] (0xc000815cc0) (3) Data frame handling\nI0128 00:03:44.535539 1233 log.go:172] (0xc000815cc0) (3) Data frame sent\nI0128 00:03:44.711113 1233 log.go:172] (0xc0009ce000) (0xc000815cc0) Stream removed, broadcasting: 3\nI0128 00:03:44.711619 1233 log.go:172] (0xc0009ce000) Data frame received for 1\nI0128 00:03:44.711644 1233 log.go:172] (0xc00074a000) (1) Data frame handling\nI0128 00:03:44.711667 1233 log.go:172] (0xc00074a000) (1) Data frame sent\nI0128 00:03:44.711850 1233 log.go:172] (0xc0009ce000) (0xc00070c320) Stream removed, broadcasting: 5\nI0128 00:03:44.711939 1233 log.go:172] (0xc0009ce000) (0xc00074a000) Stream removed, broadcasting: 1\nI0128 00:03:44.711965 1233 log.go:172] (0xc0009ce000) Go away received\nI0128 00:03:44.713976 1233 log.go:172] (0xc0009ce000) (0xc00074a000) Stream removed, broadcasting: 1\nI0128 00:03:44.714011 1233 log.go:172] (0xc0009ce000) (0xc000815cc0) Stream removed, broadcasting: 3\nI0128 00:03:44.714022 1233 log.go:172] (0xc0009ce000) (0xc00070c320) Stream removed, broadcasting: 5\n"
Jan 28 00:03:44.744: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 28 00:03:44.744: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 28 00:03:44.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8230 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 28 00:03:45.164: INFO: stderr: "I0128 00:03:44.979433 1253 log.go:172] (0xc0001054a0) (0xc00062fae0) Create stream\nI0128 00:03:44.979751 1253 log.go:172] (0xc0001054a0) (0xc00062fae0) Stream added, broadcasting: 1\nI0128 00:03:44.982487 1253 log.go:172] (0xc0001054a0) Reply frame received for 1\nI0128 00:03:44.982514 1253 log.go:172] (0xc0001054a0) (0xc0009e6000) Create stream\nI0128 00:03:44.982522 1253 log.go:172] (0xc0001054a0) (0xc0009e6000) Stream added, broadcasting: 3\nI0128 00:03:44.983553 1253 log.go:172] (0xc0001054a0) Reply frame received for 3\nI0128 00:03:44.983575 1253 log.go:172] (0xc0001054a0) (0xc0009d0000) Create stream\nI0128 00:03:44.983583 1253 log.go:172] (0xc0001054a0) (0xc0009d0000) Stream added, broadcasting: 5\nI0128 00:03:44.984423 1253 log.go:172] (0xc0001054a0) Reply frame received for 5\nI0128 00:03:45.048545 1253 log.go:172] (0xc0001054a0) Data frame received for 5\nI0128 00:03:45.048727 1253 log.go:172] (0xc0009d0000) (5) Data frame handling\nI0128 00:03:45.048773 1253 log.go:172] (0xc0009d0000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0128 00:03:45.086566 1253 log.go:172] (0xc0001054a0) Data frame received for 3\nI0128 00:03:45.086621 1253 log.go:172] (0xc0009e6000) (3) Data frame handling\nI0128 00:03:45.086639 1253 log.go:172] (0xc0009e6000) (3) Data frame sent\nI0128 00:03:45.149993 1253 log.go:172] (0xc0001054a0) (0xc0009e6000) Stream removed, broadcasting: 3\nI0128 00:03:45.150156 1253 log.go:172] (0xc0001054a0) Data frame received for 1\nI0128 00:03:45.150202 1253 log.go:172] (0xc00062fae0) (1) Data frame handling\nI0128 00:03:45.150236 1253 log.go:172] (0xc00062fae0) (1) Data frame sent\nI0128 00:03:45.150276 1253 log.go:172] (0xc0001054a0) (0xc00062fae0) Stream removed, broadcasting: 1\nI0128 00:03:45.150337 1253 log.go:172] (0xc0001054a0) (0xc0009d0000) Stream removed, broadcasting: 5\nI0128 00:03:45.150392 1253 log.go:172] (0xc0001054a0) Go away received\nI0128 00:03:45.151368 1253 log.go:172] (0xc0001054a0) (0xc00062fae0) Stream removed, broadcasting: 1\nI0128 00:03:45.151387 1253 log.go:172] (0xc0001054a0) (0xc0009e6000) Stream removed, broadcasting: 3\nI0128 00:03:45.151403 1253 log.go:172] (0xc0001054a0) (0xc0009d0000) Stream removed, broadcasting: 5\n"
Jan 28 00:03:45.164: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 28 00:03:45.164: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 28 00:03:45.164: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 00:03:45.172: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 28 00:03:55.187: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 00:03:55.187: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 00:03:55.187: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 28 00:03:55.210: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999378s
Jan 28 00:03:56.217: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989274545s
Jan 28 00:03:57.224: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.982397218s
Jan 28 00:03:58.231: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.975336073s
Jan 28 00:03:59.238: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.96798122s
Jan 28 00:04:00.248: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.960433324s
Jan 28 00:04:01.259: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.950658885s
Jan 28 00:04:02.269: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.939963739s
Jan 28 00:04:03.276: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.93021624s
Jan 28 00:04:04.284: INFO: Verifying statefulset ss doesn't scale past 3 for another 923.389669ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8230
Jan 28 00:04:05.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8230 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 00:04:05.695: INFO: stderr: "I0128 00:04:05.516214 1273 log.go:172] (0xc0009e80b0) (0xc000802500) Create stream\nI0128 00:04:05.516464 1273 log.go:172] (0xc0009e80b0) (0xc000802500) Stream added, broadcasting: 1\nI0128 00:04:05.519223 1273 log.go:172] (0xc0009e80b0) Reply frame received for 1\nI0128 00:04:05.519266 1273 log.go:172] (0xc0009e80b0) (0xc0009b6000) Create stream\nI0128 00:04:05.519275 1273 log.go:172] (0xc0009e80b0) (0xc0009b6000) Stream added, broadcasting: 3\nI0128 00:04:05.520414 1273 log.go:172] (0xc0009e80b0) Reply frame received for 3\nI0128 00:04:05.520438 1273 log.go:172] (0xc0009e80b0) (0xc0005dbc20) Create stream\nI0128 00:04:05.520448 1273 log.go:172] (0xc0009e80b0) (0xc0005dbc20) Stream added, broadcasting: 5\nI0128 00:04:05.524492 1273 log.go:172] (0xc0009e80b0) Reply frame received for 5\nI0128 00:04:05.598264 1273 log.go:172] (0xc0009e80b0) Data frame received for 3\nI0128 00:04:05.598336 1273 log.go:172] (0xc0009b6000) (3) Data frame handling\nI0128 00:04:05.598357 1273 log.go:172] (0xc0009b6000) (3) Data frame sent\nI0128 00:04:05.598423 1273 log.go:172] (0xc0009e80b0) Data frame received for 5\nI0128 00:04:05.598446 1273 log.go:172] (0xc0005dbc20) (5) Data frame handling\nI0128 00:04:05.598476 1273 log.go:172] (0xc0005dbc20) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0128 00:04:05.682637 1273 log.go:172] (0xc0009e80b0) Data frame received for 1\nI0128 00:04:05.682746 1273 log.go:172] (0xc0009e80b0) (0xc0009b6000) Stream removed, broadcasting: 3\nI0128 00:04:05.682851 1273 log.go:172] (0xc000802500) (1) Data frame handling\nI0128 00:04:05.682894 1273 log.go:172] (0xc000802500) (1) Data frame sent\nI0128 00:04:05.682923 1273 log.go:172] (0xc0009e80b0) (0xc0005dbc20) Stream removed, broadcasting: 5\nI0128 00:04:05.682946 1273 log.go:172] (0xc0009e80b0) (0xc000802500) Stream removed, broadcasting: 1\nI0128 00:04:05.682971 1273 log.go:172] (0xc0009e80b0) Go away received\nI0128 00:04:05.684046 1273 log.go:172] (0xc0009e80b0) (0xc000802500) Stream removed, broadcasting: 1\nI0128 00:04:05.684062 1273 log.go:172] (0xc0009e80b0) (0xc0009b6000) Stream removed, broadcasting: 3\nI0128 00:04:05.684068 1273 log.go:172] (0xc0009e80b0) (0xc0005dbc20) Stream removed, broadcasting: 5\n"
Jan 28 00:04:05.695: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 28 00:04:05.695: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 28 00:04:05.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8230 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 00:04:06.084: INFO: stderr: "I0128 00:04:05.912200 1293 log.go:172] (0xc0009e6000) (0xc0008c6000) Create stream\nI0128 00:04:05.912493 1293 log.go:172] (0xc0009e6000) (0xc0008c6000) Stream added, broadcasting: 1\nI0128 00:04:05.918215 1293 log.go:172] (0xc0009e6000) Reply frame received for 1\nI0128 00:04:05.918390 1293 log.go:172] (0xc0009e6000) (0xc0009b0280) Create stream\nI0128 00:04:05.918436 1293 log.go:172] (0xc0009e6000) (0xc0009b0280) Stream added, broadcasting: 3\nI0128 00:04:05.919959 1293 log.go:172] (0xc0009e6000) Reply frame received for 3\nI0128 00:04:05.920002 1293 log.go:172] (0xc0009e6000) (0xc0007ae1e0) Create stream\nI0128 00:04:05.920022 1293 log.go:172] (0xc0009e6000) (0xc0007ae1e0) Stream added, broadcasting: 5\nI0128 00:04:05.922737 1293 log.go:172] (0xc0009e6000) Reply frame received for 5\nI0128 00:04:05.992643 1293 log.go:172] (0xc0009e6000) Data frame received for 3\nI0128 00:04:05.992699 1293 log.go:172] (0xc0009b0280) (3) Data frame handling\nI0128 00:04:05.992714 1293 log.go:172] (0xc0009b0280) (3) Data frame sent\nI0128 00:04:05.992752 1293 log.go:172] (0xc0009e6000) Data frame received for 5\nI0128 00:04:05.992758 1293 log.go:172] (0xc0007ae1e0) (5) Data frame handling\nI0128 00:04:05.992764 1293 log.go:172] (0xc0007ae1e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0128 00:04:06.072320 1293 log.go:172] (0xc0009e6000) Data frame received for 1\nI0128 00:04:06.072489 1293 log.go:172] (0xc0009e6000) (0xc0009b0280) Stream removed, broadcasting: 3\nI0128 00:04:06.072631 1293 log.go:172] (0xc0009e6000) (0xc0007ae1e0) Stream removed, broadcasting: 5\nI0128 00:04:06.072674 1293 log.go:172] (0xc0008c6000) (1) Data frame handling\nI0128 00:04:06.072704 1293 log.go:172] (0xc0008c6000) (1) Data frame sent\nI0128 00:04:06.072715 1293 log.go:172] (0xc0009e6000) (0xc0008c6000) Stream removed, broadcasting: 1\nI0128 00:04:06.073890 1293 log.go:172] (0xc0009e6000) Go away received\nI0128 00:04:06.074196 1293 log.go:172] (0xc0009e6000) (0xc0008c6000) Stream removed, broadcasting: 1\nI0128 00:04:06.074223 1293 log.go:172] (0xc0009e6000) (0xc0009b0280) Stream removed, broadcasting: 3\nI0128 00:04:06.074234 1293 log.go:172] (0xc0009e6000) (0xc0007ae1e0) Stream removed, broadcasting: 5\n"
Jan 28 00:04:06.084: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 28 00:04:06.084: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 28 00:04:06.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8230 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 00:04:06.433: INFO: stderr: "I0128 00:04:06.253750 1313 log.go:172] (0xc000accbb0) (0xc000647e00) Create stream\nI0128 00:04:06.254042 1313 log.go:172] (0xc000accbb0) (0xc000647e00) Stream added, broadcasting: 1\nI0128 00:04:06.257963 1313 log.go:172] (0xc000accbb0) Reply frame received for 1\nI0128 00:04:06.258055 1313 log.go:172] (0xc000accbb0) (0xc000647ea0) Create stream\nI0128 00:04:06.258092 1313 log.go:172] (0xc000accbb0) (0xc000647ea0) Stream added, broadcasting: 3\nI0128 00:04:06.259374 1313 log.go:172] (0xc000accbb0) Reply frame received for 3\nI0128 00:04:06.259400 1313 log.go:172] (0xc000accbb0) (0xc000b20f00) Create stream\nI0128 00:04:06.259409 1313 log.go:172] (0xc000accbb0) (0xc000b20f00) Stream added, broadcasting: 5\nI0128 00:04:06.262766 1313 log.go:172] (0xc000accbb0) Reply frame received for 5\nI0128 00:04:06.349319 1313 log.go:172] (0xc000accbb0) Data frame received for 5\nI0128 00:04:06.349936 1313 log.go:172] (0xc000b20f00) (5) Data frame handling\nI0128 00:04:06.350155 1313 log.go:172] (0xc000b20f00) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0128 00:04:06.350328 1313 log.go:172] (0xc000accbb0) Data frame received for 3\nI0128 00:04:06.350692 1313 log.go:172] (0xc000647ea0) (3) Data frame handling\nI0128 00:04:06.350799 1313 log.go:172] (0xc000647ea0) (3) Data frame sent\nI0128 00:04:06.420470 1313 log.go:172] (0xc000accbb0) (0xc000b20f00) Stream removed, broadcasting: 5\nI0128 00:04:06.420679 1313 log.go:172] (0xc000accbb0) Data frame received for 1\nI0128 00:04:06.420721 1313 log.go:172] (0xc000accbb0) (0xc000647ea0) Stream removed, broadcasting: 3\nI0128 00:04:06.420760 1313 log.go:172] (0xc000647e00) (1) Data frame handling\nI0128 00:04:06.420778 1313 log.go:172] (0xc000647e00) (1) Data frame sent\nI0128 00:04:06.420792 1313 log.go:172] (0xc000accbb0) (0xc000647e00) Stream removed, broadcasting: 1\nI0128 00:04:06.420812 1313 log.go:172] (0xc000accbb0) Go away received\nI0128 00:04:06.421608 1313 log.go:172] (0xc000accbb0) (0xc000647e00) Stream removed, broadcasting: 1\nI0128 00:04:06.421620 1313 log.go:172] (0xc000accbb0) (0xc000647ea0) Stream removed, broadcasting: 3\nI0128 00:04:06.421625 1313 log.go:172] (0xc000accbb0) (0xc000b20f00) Stream removed, broadcasting: 5\n"
Jan 28 00:04:06.433: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 28 00:04:06.433: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 28 00:04:06.433: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jan 28 00:04:36.465: INFO: Deleting all statefulset in ns statefulset-8230
Jan 28 00:04:36.469: INFO: Scaling statefulset ss to 0
Jan 28 00:04:36.481: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 00:04:36.483: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:04:36.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8230" for this suite.

• [SLOW TEST:106.684 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":280,"completed":53,"skipped":904,"failed":0}
SSSSSSSSSSSSS
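The scaling in this test goes through the StatefulSet scale subresource, and the ordering/halting behavior follows from the default OrderedReady pod management policy: ss-1 is created only after ss-0 is Ready, and scale-down removes the highest ordinal first, pausing while any pod is unready (which is why hiding index.html stalls progress above). A minimal sketch of the scale call, reusing the namespace and name from this run and assuming a client-go version whose scale client takes a context:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	sts := cs.AppsV1().StatefulSets("statefulset-8230")

	// Read the current scale subresource for statefulset "ss".
	scale, err := sts.GetScale(context.TODO(), "ss", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("current replicas:", scale.Spec.Replicas)

	// Scale up to 3; with OrderedReady the controller creates ss-1 and
	// ss-2 one at a time, each only after the previous pod is Ready.
	scale.Spec.Replicas = 3
	if _, err := sts.UpdateScale(context.TODO(), "ss", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}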
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:04:36.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:04:36.684: INFO: (0) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 20.76869ms)
Jan 28 00:04:36.689: INFO: (1) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.586225ms)
Jan 28 00:04:36.692: INFO: (2) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.510319ms)
Jan 28 00:04:36.697: INFO: (3) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.359727ms)
Jan 28 00:04:36.701: INFO: (4) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.922102ms)
Jan 28 00:04:36.705: INFO: (5) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.261382ms)
Jan 28 00:04:36.710: INFO: (6) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.674332ms)
Jan 28 00:04:36.713: INFO: (7) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.907174ms)
Jan 28 00:04:36.717: INFO: (8) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.377076ms)
Jan 28 00:04:36.720: INFO: (9) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.227672ms)
Jan 28 00:04:36.723: INFO: (10) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 2.980317ms)
Jan 28 00:04:36.727: INFO: (11) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.487328ms)
Jan 28 00:04:36.731: INFO: (12) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.289606ms)
Jan 28 00:04:36.735: INFO: (13) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.95002ms)
Jan 28 00:04:36.739: INFO: (14) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.106547ms)
Jan 28 00:04:36.743: INFO: (15) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.663879ms)
Jan 28 00:04:36.760: INFO: (16) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 17.043988ms)
Jan 28 00:04:36.780: INFO: (17) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 19.816528ms)
Jan 28 00:04:36.788: INFO: (18) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 7.792377ms)
Jan 28 00:04:36.791: INFO: (19) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.104529ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:04:36.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-170" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":280,"completed":54,"skipped":917,"failed":0}
SSSSSSSSSSSSSSSSSS
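The node proxy subresource used above is just a path on the apiserver that forwards to the kubelet's /logs/ endpoint. A sketch of the equivalent raw REST call against the node from this run, assuming a client-go version where DoRaw accepts a context:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /api/v1/nodes/jerma-node/proxy/logs/ via the apiserver, which
	// proxies to the kubelet and returns the node's log directory listing
	// (alternatives.log, apt/, ... as seen in the iterations above).
	data, err := cs.CoreV1().RESTClient().Get().
		AbsPath("/api/v1/nodes/jerma-node/proxy/logs/").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", data)
}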
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:04:36.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-projected-fcnj
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 00:04:36.923: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fcnj" in namespace "subpath-2854" to be "success or failure"
Jan 28 00:04:36.943: INFO: Pod "pod-subpath-test-projected-fcnj": Phase="Pending", Reason="", readiness=false. Elapsed: 19.811013ms
Jan 28 00:04:38.950: INFO: Pod "pod-subpath-test-projected-fcnj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027178396s
Jan 28 00:04:40.956: INFO: Pod "pod-subpath-test-projected-fcnj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032948546s
Jan 28 00:04:42.962: INFO: Pod "pod-subpath-test-projected-fcnj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039444725s
Jan 28 00:04:44.968: INFO: Pod "pod-subpath-test-projected-fcnj": Phase="Running", Reason="", readiness=true. Elapsed: 8.044961594s
Jan 28 00:04:46.976: INFO: Pod "pod-subpath-test-projected-fcnj": Phase="Running", Reason="", readiness=true. Elapsed: 10.053112794s
Jan 28 00:04:48.983: INFO: Pod "pod-subpath-test-projected-fcnj": Phase="Running", Reason="", readiness=true. Elapsed: 12.060034163s
Jan 28 00:04:51.001: INFO: Pod "pod-subpath-test-projected-fcnj": Phase="Running", Reason="", readiness=true. Elapsed: 14.07799389s
Jan 28 00:04:53.008: INFO: Pod "pod-subpath-test-projected-fcnj": Phase="Running", Reason="", readiness=true. Elapsed: 16.085511566s
Jan 28 00:04:55.013: INFO: Pod "pod-subpath-test-projected-fcnj": Phase="Running", Reason="", readiness=true. Elapsed: 18.090065782s
Jan 28 00:04:57.024: INFO: Pod "pod-subpath-test-projected-fcnj": Phase="Running", Reason="", readiness=true. Elapsed: 20.101246296s
Jan 28 00:04:59.039: INFO: Pod "pod-subpath-test-projected-fcnj": Phase="Running", Reason="", readiness=true. Elapsed: 22.116177712s
Jan 28 00:05:01.046: INFO: Pod "pod-subpath-test-projected-fcnj": Phase="Running", Reason="", readiness=true. Elapsed: 24.122740073s
Jan 28 00:05:03.055: INFO: Pod "pod-subpath-test-projected-fcnj": Phase="Running", Reason="", readiness=true. Elapsed: 26.132324675s
Jan 28 00:05:05.063: INFO: Pod "pod-subpath-test-projected-fcnj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.140034451s
STEP: Saw pod success
Jan 28 00:05:05.063: INFO: Pod "pod-subpath-test-projected-fcnj" satisfied condition "success or failure"
Jan 28 00:05:05.070: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-fcnj container test-container-subpath-projected-fcnj: 
STEP: delete the pod
Jan 28 00:05:05.294: INFO: Waiting for pod pod-subpath-test-projected-fcnj to disappear
Jan 28 00:05:05.330: INFO: Pod pod-subpath-test-projected-fcnj no longer exists
STEP: Deleting pod pod-subpath-test-projected-fcnj
Jan 28 00:05:05.330: INFO: Deleting pod "pod-subpath-test-projected-fcnj" in namespace "subpath-2854"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:05:05.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2854" for this suite.

• [SLOW TEST:28.566 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":280,"completed":55,"skipped":935,"failed":0}
SSSSSSSS
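The subpath test mounts a single entry of an atomically updated projected volume via a volumeMount subPath rather than mounting the whole directory. A sketch of that shape; the ConfigMap name, key, and mount path below are illustrative assumptions, not values from this run:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Projected volumes are written atomically by the kubelet,
					// which is what "Atomic writer volumes" refers to.
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /test-volume/my-key"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume/my-key",
					// Mount just one key from the volume instead of the
					// whole directory.
					SubPath: "my-key",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}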
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:05:05.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:05:05.606: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-00ce526a-b344-43a5-b07b-a9b5f8089595" in namespace "security-context-test-167" to be "success or failure"
Jan 28 00:05:05.642: INFO: Pod "busybox-privileged-false-00ce526a-b344-43a5-b07b-a9b5f8089595": Phase="Pending", Reason="", readiness=false. Elapsed: 35.96061ms
Jan 28 00:05:07.650: INFO: Pod "busybox-privileged-false-00ce526a-b344-43a5-b07b-a9b5f8089595": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043386102s
Jan 28 00:05:09.658: INFO: Pod "busybox-privileged-false-00ce526a-b344-43a5-b07b-a9b5f8089595": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051451121s
Jan 28 00:05:11.666: INFO: Pod "busybox-privileged-false-00ce526a-b344-43a5-b07b-a9b5f8089595": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05911326s
Jan 28 00:05:13.674: INFO: Pod "busybox-privileged-false-00ce526a-b344-43a5-b07b-a9b5f8089595": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067161912s
Jan 28 00:05:13.674: INFO: Pod "busybox-privileged-false-00ce526a-b344-43a5-b07b-a9b5f8089595" satisfied condition "success or failure"
Jan 28 00:05:13.702: INFO: Got logs for pod "busybox-privileged-false-00ce526a-b344-43a5-b07b-a9b5f8089595": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:05:13.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-167" for this suite.

• [SLOW TEST:8.358 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":56,"skipped":943,"failed":0}
SSSSSSSSSSSSSSSS
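The assertion above keys off the container's own output: without privilege, a network-configuration syscall is denied inside the pod. A sketch of a pod spec with privileged explicitly set to false; the command is an assumption chosen to reproduce the "RTNETLINK answers: Operation not permitted" message the test saw:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	privileged := false
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox-privileged-false-example",
				Image: "busybox",
				// Adding a link requires CAP_NET_ADMIN; unprivileged, this
				// prints "ip: RTNETLINK answers: Operation not permitted".
				Command:         []string{"sh", "-c", "ip link add dummy0 type dummy || true"},
				SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}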
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:05:13.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 28 00:05:22.456: INFO: Successfully updated pod "annotationupdate2d00c3b9-5b53-4480-b0b8-50e278a0f9c4"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:05:24.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7398" for this suite.

• [SLOW TEST:10.837 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":57,"skipped":959,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
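The annotation test relies on the kubelet keeping downward API volume files in sync with pod metadata: update the pod's annotations and the mounted file is rewritten in place. A sketch under assumed names (pod name, annotation key, and mount path are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	ns := "default"
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-example",
			Annotations: map[string]string{"build": "one"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
	created, err := cs.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Change the annotation; the kubelet rewrites /etc/podinfo/annotations
	// in place, which is the modification the test waits to observe.
	created.Annotations["build"] = "two"
	if _, err := cs.CoreV1().Pods(ns).Update(context.TODO(), created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}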
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:05:24.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Jan 28 00:05:24.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9169'
Jan 28 00:05:25.087: INFO: stderr: ""
Jan 28 00:05:25.087: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 00:05:25.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9169'
Jan 28 00:05:25.246: INFO: stderr: ""
Jan 28 00:05:25.246: INFO: stdout: "update-demo-nautilus-db8g5 update-demo-nautilus-k5dlc "
Jan 28 00:05:25.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-db8g5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9169'
Jan 28 00:05:25.358: INFO: stderr: ""
Jan 28 00:05:25.359: INFO: stdout: ""
Jan 28 00:05:25.359: INFO: update-demo-nautilus-db8g5 is created but not running
Jan 28 00:05:30.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9169'
Jan 28 00:05:30.664: INFO: stderr: ""
Jan 28 00:05:30.664: INFO: stdout: "update-demo-nautilus-db8g5 update-demo-nautilus-k5dlc "
Jan 28 00:05:30.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-db8g5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9169'
Jan 28 00:05:30.995: INFO: stderr: ""
Jan 28 00:05:30.995: INFO: stdout: ""
Jan 28 00:05:30.995: INFO: update-demo-nautilus-db8g5 is created but not running
Jan 28 00:05:35.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9169'
Jan 28 00:05:36.220: INFO: stderr: ""
Jan 28 00:05:36.220: INFO: stdout: "update-demo-nautilus-db8g5 update-demo-nautilus-k5dlc "
Jan 28 00:05:36.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-db8g5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9169'
Jan 28 00:05:36.341: INFO: stderr: ""
Jan 28 00:05:36.342: INFO: stdout: "true"
Jan 28 00:05:36.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-db8g5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9169'
Jan 28 00:05:36.476: INFO: stderr: ""
Jan 28 00:05:36.476: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 00:05:36.477: INFO: validating pod update-demo-nautilus-db8g5
Jan 28 00:05:36.490: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 00:05:36.491: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 28 00:05:36.491: INFO: update-demo-nautilus-db8g5 is verified up and running
Jan 28 00:05:36.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k5dlc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9169'
Jan 28 00:05:36.655: INFO: stderr: ""
Jan 28 00:05:36.655: INFO: stdout: "true"
Jan 28 00:05:36.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k5dlc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9169'
Jan 28 00:05:36.933: INFO: stderr: ""
Jan 28 00:05:36.933: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 00:05:36.933: INFO: validating pod update-demo-nautilus-k5dlc
Jan 28 00:05:36.961: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 00:05:36.961: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 28 00:05:36.961: INFO: update-demo-nautilus-k5dlc is verified up and running
STEP: scaling down the replication controller
Jan 28 00:05:36.964: INFO: scanned /root for discovery docs: 
Jan 28 00:05:36.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9169'
Jan 28 00:05:38.186: INFO: stderr: ""
Jan 28 00:05:38.187: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 00:05:38.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9169'
Jan 28 00:05:38.378: INFO: stderr: ""
Jan 28 00:05:38.378: INFO: stdout: "update-demo-nautilus-db8g5 update-demo-nautilus-k5dlc "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 28 00:05:43.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9169'
Jan 28 00:05:43.578: INFO: stderr: ""
Jan 28 00:05:43.578: INFO: stdout: "update-demo-nautilus-db8g5 "
Jan 28 00:05:43.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-db8g5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9169'
Jan 28 00:05:43.750: INFO: stderr: ""
Jan 28 00:05:43.750: INFO: stdout: "true"
Jan 28 00:05:43.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-db8g5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9169'
Jan 28 00:05:44.082: INFO: stderr: ""
Jan 28 00:05:44.082: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 00:05:44.083: INFO: validating pod update-demo-nautilus-db8g5
Jan 28 00:05:44.098: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 00:05:44.098: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 28 00:05:44.098: INFO: update-demo-nautilus-db8g5 is verified up and running
STEP: scaling up the replication controller
Jan 28 00:05:44.104: INFO: scanned /root for discovery docs: 
Jan 28 00:05:44.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9169'
Jan 28 00:05:45.298: INFO: stderr: ""
Jan 28 00:05:45.298: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 00:05:45.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9169'
Jan 28 00:05:45.464: INFO: stderr: ""
Jan 28 00:05:45.464: INFO: stdout: "update-demo-nautilus-7kc4x update-demo-nautilus-db8g5 "
Jan 28 00:05:45.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7kc4x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9169'
Jan 28 00:05:45.964: INFO: stderr: ""
Jan 28 00:05:45.964: INFO: stdout: ""
Jan 28 00:05:45.964: INFO: update-demo-nautilus-7kc4x is created but not running
Jan 28 00:05:50.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9169'
Jan 28 00:05:51.127: INFO: stderr: ""
Jan 28 00:05:51.127: INFO: stdout: "update-demo-nautilus-7kc4x update-demo-nautilus-db8g5 "
Jan 28 00:05:51.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7kc4x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9169'
Jan 28 00:05:51.337: INFO: stderr: ""
Jan 28 00:05:51.337: INFO: stdout: "true"
Jan 28 00:05:51.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7kc4x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9169'
Jan 28 00:05:51.474: INFO: stderr: ""
Jan 28 00:05:51.474: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 00:05:51.474: INFO: validating pod update-demo-nautilus-7kc4x
Jan 28 00:05:51.478: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 00:05:51.478: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 28 00:05:51.478: INFO: update-demo-nautilus-7kc4x is verified up and running
Jan 28 00:05:51.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-db8g5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9169'
Jan 28 00:05:51.605: INFO: stderr: ""
Jan 28 00:05:51.605: INFO: stdout: "true"
Jan 28 00:05:51.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-db8g5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9169'
Jan 28 00:05:51.708: INFO: stderr: ""
Jan 28 00:05:51.708: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 00:05:51.708: INFO: validating pod update-demo-nautilus-db8g5
Jan 28 00:05:51.713: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 00:05:51.713: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 28 00:05:51.713: INFO: update-demo-nautilus-db8g5 is verified up and running
STEP: using delete to clean up resources
Jan 28 00:05:51.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9169'
Jan 28 00:05:51.831: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 00:05:51.831: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 28 00:05:51.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9169'
Jan 28 00:05:51.974: INFO: stderr: "No resources found in kubectl-9169 namespace.\n"
Jan 28 00:05:51.975: INFO: stdout: ""
Jan 28 00:05:51.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9169 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 28 00:05:52.220: INFO: stderr: ""
Jan 28 00:05:52.220: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:05:52.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9169" for this suite.

• [SLOW TEST:27.688 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":280,"completed":58,"skipped":1023,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:05:52.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 28 00:05:52.364: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:06:08.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7039" for this suite.

• [SLOW TEST:15.982 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":280,"completed":59,"skipped":1045,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:06:08.238: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:06:16.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6151" for this suite.

• [SLOW TEST:8.461 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":280,"completed":60,"skipped":1048,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:06:16.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:06:16.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 28 00:06:19.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-712 create -f -'
Jan 28 00:06:21.374: INFO: stderr: ""
Jan 28 00:06:21.374: INFO: stdout: "e2e-test-crd-publish-openapi-631-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 28 00:06:21.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-712 delete e2e-test-crd-publish-openapi-631-crds test-cr'
Jan 28 00:06:21.640: INFO: stderr: ""
Jan 28 00:06:21.640: INFO: stdout: "e2e-test-crd-publish-openapi-631-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jan 28 00:06:21.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-712 apply -f -'
Jan 28 00:06:22.086: INFO: stderr: ""
Jan 28 00:06:22.086: INFO: stdout: "e2e-test-crd-publish-openapi-631-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 28 00:06:22.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-712 delete e2e-test-crd-publish-openapi-631-crds test-cr'
Jan 28 00:06:22.362: INFO: stderr: ""
Jan 28 00:06:22.362: INFO: stdout: "e2e-test-crd-publish-openapi-631-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jan 28 00:06:22.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-631-crds'
Jan 28 00:06:22.743: INFO: stderr: ""
Jan 28 00:06:22.743: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-631-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:06:26.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-712" for this suite.

• [SLOW TEST:9.498 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":280,"completed":61,"skipped":1097,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:06:26.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 28 00:06:27.277: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 28 00:06:29.302: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766787, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766787, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766787, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766787, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:06:31.317: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766787, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766787, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766787, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766787, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:06:33.312: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766787, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766787, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766787, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766787, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 00:06:36.361: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:06:36.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Creating a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:06:38.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2901" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:12.613 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":280,"completed":62,"skipped":1124,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:06:38.816: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 00:06:39.645: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 00:06:41.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766799, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766799, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766799, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766799, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:06:43.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766799, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766799, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766799, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766799, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:06:45.666: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766799, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766799, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766799, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766799, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:06:47.661: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766799, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766799, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766799, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715766799, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 00:06:50.753: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:06:50.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8515" for this suite.
STEP: Destroying namespace "webhook-8515-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.389 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":280,"completed":63,"skipped":1128,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:06:51.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 28 00:06:51.295: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f0bbc57-7fd8-4ddf-96f6-4b012a21e0e9" in namespace "projected-8942" to be "success or failure"
Jan 28 00:06:51.321: INFO: Pod "downwardapi-volume-2f0bbc57-7fd8-4ddf-96f6-4b012a21e0e9": Phase="Pending", Reason="", readiness=false. Elapsed: 25.810347ms
Jan 28 00:06:53.327: INFO: Pod "downwardapi-volume-2f0bbc57-7fd8-4ddf-96f6-4b012a21e0e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031816814s
Jan 28 00:06:55.383: INFO: Pod "downwardapi-volume-2f0bbc57-7fd8-4ddf-96f6-4b012a21e0e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087409037s
Jan 28 00:06:57.394: INFO: Pod "downwardapi-volume-2f0bbc57-7fd8-4ddf-96f6-4b012a21e0e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098100025s
Jan 28 00:06:59.908: INFO: Pod "downwardapi-volume-2f0bbc57-7fd8-4ddf-96f6-4b012a21e0e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.612763175s
STEP: Saw pod success
Jan 28 00:06:59.908: INFO: Pod "downwardapi-volume-2f0bbc57-7fd8-4ddf-96f6-4b012a21e0e9" satisfied condition "success or failure"
Jan 28 00:06:59.926: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2f0bbc57-7fd8-4ddf-96f6-4b012a21e0e9 container client-container: 
STEP: delete the pod
Jan 28 00:07:00.002: INFO: Waiting for pod downwardapi-volume-2f0bbc57-7fd8-4ddf-96f6-4b012a21e0e9 to disappear
Jan 28 00:07:00.135: INFO: Pod downwardapi-volume-2f0bbc57-7fd8-4ddf-96f6-4b012a21e0e9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:07:00.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8942" for this suite.

• [SLOW TEST:8.947 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":64,"skipped":1137,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:07:00.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-91ce71d9-3403-436d-b125-cd63818f0202
STEP: Creating a pod to test consume configMaps
Jan 28 00:07:00.481: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ff153c83-dbc0-482f-bbb1-39acdc14a6ce" in namespace "projected-8358" to be "success or failure"
Jan 28 00:07:00.496: INFO: Pod "pod-projected-configmaps-ff153c83-dbc0-482f-bbb1-39acdc14a6ce": Phase="Pending", Reason="", readiness=false. Elapsed: 14.455889ms
Jan 28 00:07:02.506: INFO: Pod "pod-projected-configmaps-ff153c83-dbc0-482f-bbb1-39acdc14a6ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024188452s
Jan 28 00:07:04.571: INFO: Pod "pod-projected-configmaps-ff153c83-dbc0-482f-bbb1-39acdc14a6ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089851485s
Jan 28 00:07:06.590: INFO: Pod "pod-projected-configmaps-ff153c83-dbc0-482f-bbb1-39acdc14a6ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108331926s
Jan 28 00:07:08.598: INFO: Pod "pod-projected-configmaps-ff153c83-dbc0-482f-bbb1-39acdc14a6ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116358021s
STEP: Saw pod success
Jan 28 00:07:08.598: INFO: Pod "pod-projected-configmaps-ff153c83-dbc0-482f-bbb1-39acdc14a6ce" satisfied condition "success or failure"
Jan 28 00:07:08.602: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-ff153c83-dbc0-482f-bbb1-39acdc14a6ce container projected-configmap-volume-test: 
STEP: delete the pod
Jan 28 00:07:09.228: INFO: Waiting for pod pod-projected-configmaps-ff153c83-dbc0-482f-bbb1-39acdc14a6ce to disappear
Jan 28 00:07:09.248: INFO: Pod pod-projected-configmaps-ff153c83-dbc0-482f-bbb1-39acdc14a6ce no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:07:09.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8358" for this suite.

• [SLOW TEST:9.113 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":65,"skipped":1161,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:07:09.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:07:09.558: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 28 00:07:09.571: INFO: Number of nodes with available pods: 0
Jan 28 00:07:09.571: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 28 00:07:09.741: INFO: Number of nodes with available pods: 0
Jan 28 00:07:09.742: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:10.747: INFO: Number of nodes with available pods: 0
Jan 28 00:07:10.747: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:11.752: INFO: Number of nodes with available pods: 0
Jan 28 00:07:11.752: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:12.746: INFO: Number of nodes with available pods: 0
Jan 28 00:07:12.746: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:13.748: INFO: Number of nodes with available pods: 0
Jan 28 00:07:13.748: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:14.748: INFO: Number of nodes with available pods: 0
Jan 28 00:07:14.748: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:15.748: INFO: Number of nodes with available pods: 0
Jan 28 00:07:15.748: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:16.748: INFO: Number of nodes with available pods: 1
Jan 28 00:07:16.748: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 28 00:07:16.800: INFO: Number of nodes with available pods: 1
Jan 28 00:07:16.800: INFO: Number of running nodes: 0, number of available pods: 1
Jan 28 00:07:17.807: INFO: Number of nodes with available pods: 0
Jan 28 00:07:17.808: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 28 00:07:17.826: INFO: Number of nodes with available pods: 0
Jan 28 00:07:17.826: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:18.832: INFO: Number of nodes with available pods: 0
Jan 28 00:07:18.832: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:19.832: INFO: Number of nodes with available pods: 0
Jan 28 00:07:19.832: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:20.833: INFO: Number of nodes with available pods: 0
Jan 28 00:07:20.833: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:21.832: INFO: Number of nodes with available pods: 0
Jan 28 00:07:21.832: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:22.831: INFO: Number of nodes with available pods: 0
Jan 28 00:07:22.831: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:23.833: INFO: Number of nodes with available pods: 0
Jan 28 00:07:23.833: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:24.830: INFO: Number of nodes with available pods: 0
Jan 28 00:07:24.830: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:25.836: INFO: Number of nodes with available pods: 0
Jan 28 00:07:25.836: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:26.832: INFO: Number of nodes with available pods: 0
Jan 28 00:07:26.832: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:27.832: INFO: Number of nodes with available pods: 0
Jan 28 00:07:27.832: INFO: Node jerma-node is running more than one daemon pod
Jan 28 00:07:28.829: INFO: Number of nodes with available pods: 1
Jan 28 00:07:28.829: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7367, will wait for the garbage collector to delete the pods
Jan 28 00:07:28.900: INFO: Deleting DaemonSet.extensions daemon-set took: 11.391455ms
Jan 28 00:07:29.200: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.540488ms
Jan 28 00:07:42.412: INFO: Number of nodes with available pods: 0
Jan 28 00:07:42.412: INFO: Number of running nodes: 0, number of available pods: 0
Jan 28 00:07:42.418: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7367/daemonsets","resourceVersion":"4771965"},"items":null}

Jan 28 00:07:42.422: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7367/pods","resourceVersion":"4771965"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:07:42.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7367" for this suite.

• [SLOW TEST:33.228 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":280,"completed":66,"skipped":1170,"failed":0}
S
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:07:42.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:07:42.607: INFO: Creating deployment "webserver-deployment"
Jan 28 00:07:42.625: INFO: Waiting for observed generation 1
Jan 28 00:07:44.674: INFO: Waiting for all required pods to come up
Jan 28 00:07:45.255: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 28 00:08:07.378: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 28 00:08:07.491: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 28 00:08:07.498: INFO: Updating deployment webserver-deployment
Jan 28 00:08:07.498: INFO: Waiting for observed generation 2
Jan 28 00:08:09.875: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 28 00:08:09.885: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 28 00:08:10.417: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 28 00:08:10.496: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 28 00:08:10.496: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 28 00:08:10.613: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 28 00:08:10.698: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 28 00:08:10.698: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 28 00:08:10.710: INFO: Updating deployment webserver-deployment
Jan 28 00:08:10.710: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan 28 00:08:10.913: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 28 00:08:12.481: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
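
Note: the two replica counts just verified are the proportional-scaling arithmetic itself. Scaling from 10 to 30 mid-rollout with maxSurge=3 (visible in the deployment dump below) allows 30+3=33 total pods, split between the two replicasets in proportion to their pre-scale sizes of 8 and 5. A sketch of the arithmetic; this is a simplification of the deployment controller's actual proportion logic:

package main

import "fmt"

func main() {
	oldRS, newRS := 8, 5 // .spec.replicas of the two replicasets before the scale
	allowed := 30 + 3    // new desired size plus maxSurge
	total := oldRS + newRS
	oldTarget := oldRS * allowed / total // 8*33/13 = 20, rounded down
	newTarget := allowed - oldTarget     // leftover goes to the newer replicaset: 13
	fmt.Println(oldTarget, newTarget)    // 20 13, matching the two checks above
}
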
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jan 28 00:08:19.032: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-2145 /apis/apps/v1/namespaces/deployment-2145/deployments/webserver-deployment b1822ca6-6acd-40b1-9ed6-d3566efb2e6c 4772288 3 2020-01-28 00:07:42 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034d2688  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-28 00:08:10 +0000 UTC,LastTransitionTime:2020-01-28 00:08:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-01-28 00:08:13 +0000 UTC,LastTransitionTime:2020-01-28 00:07:42 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Jan 28 00:08:19.974: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-2145 /apis/apps/v1/namespaces/deployment-2145/replicasets/webserver-deployment-c7997dcc8 7a89b48e-df51-480c-b3da-faed64179216 4772267 3 2020-01-28 00:08:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment b1822ca6-6acd-40b1-9ed6-d3566efb2e6c 0xc0034f1e57 0xc0034f1e58}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034f1ee8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 28 00:08:19.974: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jan 28 00:08:19.974: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-2145 /apis/apps/v1/namespaces/deployment-2145/replicasets/webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 4772284 3 2020-01-28 00:07:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment b1822ca6-6acd-40b1-9ed6-d3566efb2e6c 0xc0034f1cc7 0xc0034f1cc8}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034f1db8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Jan 28 00:08:21.485: INFO: Pod "webserver-deployment-595b5b9587-86czj" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-86czj webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-86czj 6d555ae4-9b1e-4d24-b9ee-d56b5de9f6bd 4772255 0 2020-01-28 00:08:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0034aa6c7 0xc0034aa6c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.485: INFO: Pod "webserver-deployment-595b5b9587-8fr8x" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8fr8x webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-8fr8x d931da49-38e4-4b32-85eb-9558f5809ca3 4772232 0 2020-01-28 00:08:11 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0034aa917 0xc0034aa918}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.486: INFO: Pod "webserver-deployment-595b5b9587-8rpwz" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-8rpwz webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-8rpwz 59b899f5-d23d-45e8-a48e-8ba934e38744 4772245 0 2020-01-28 00:08:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0034aab37 0xc0034aab38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-28 00:08:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.486: INFO: Pod "webserver-deployment-595b5b9587-dhs46" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-dhs46 webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-dhs46 0789d5a7-2165-4917-a484-1c2bb07765f8 4772261 0 2020-01-28 00:08:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0034aadc7 0xc0034aadc8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.486: INFO: Pod "webserver-deployment-595b5b9587-fx7x4" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-fx7x4 webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-fx7x4 e08f2cf3-95a9-4e08-a9ae-60d0d8cdb8b3 4772289 0 2020-01-28 00:08:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0034aaf77 0xc0034aaf78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-28 00:08:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.487: INFO: Pod "webserver-deployment-595b5b9587-h4rpc" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-h4rpc webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-h4rpc e5a1774c-a0f2-4822-9ff6-e7067a0293b5 4772259 0 2020-01-28 00:08:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0034ab1d7 0xc0034ab1d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.487: INFO: Pod "webserver-deployment-595b5b9587-jlgfp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-jlgfp webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-jlgfp 65002210-b7fa-4e39-9597-2f39c42fa15d 4772285 0 2020-01-28 00:08:10 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0034ab367 0xc0034ab368}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-28 00:08:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.488: INFO: Pod "webserver-deployment-595b5b9587-mlbh6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-mlbh6 webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-mlbh6 1c2d7669-b907-44f3-9026-02be3899a7f7 4772293 0 2020-01-28 00:08:11 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0034ab617 0xc0034ab618}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-28 00:08:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.488: INFO: Pod "webserver-deployment-595b5b9587-mv95w" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-mv95w webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-mv95w 1929c6c4-6829-4e49-96a5-6855acf53ae6 4772234 0 2020-01-28 00:08:11 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0034ab7e7 0xc0034ab7e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.489: INFO: Pod "webserver-deployment-595b5b9587-np4xj" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-np4xj webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-np4xj 48a094a1-5620-4c1a-b060-6c899197035b 4772140 0 2020-01-28 00:07:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0034ab967 0xc0034ab968}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:07:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:07:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-28 00:07:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 00:08:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://805c04e48bce1089c737e9defc5d4bf9d674e3aa0523cd702b552b31e32767db,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.490: INFO: Pod "webserver-deployment-595b5b9587-qb8bp" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qb8bp webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-qb8bp afe0d627-02f0-4f16-83f8-b8bbcaeff2b6 4772260 0 2020-01-28 00:08:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0034abbf0 0xc0034abbf1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.490: INFO: Pod "webserver-deployment-595b5b9587-rb87q" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rb87q webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-rb87q 2d9fa9bb-fe65-41ce-bf87-3df5bedb68e5 4772131 0 2020-01-28 00:07:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0034abd87 0xc0034abd88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:07:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-28 00:08:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:07:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.6,StartTime:2020-01-28 00:07:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 00:08:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://9191fd7a352a2ceba6e921796dde17ee836626ce4082bf5ba077906015ae5849,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.491: INFO: Pod "webserver-deployment-595b5b9587-sdxsf" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-sdxsf webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-sdxsf 9371885c-4034-41e9-b6fd-bf13669deff2 4772137 0 2020-01-28 00:07:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0028a40c0 0xc0028a40c1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:07:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:07:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-01-28 00:07:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 00:08:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://61b932d0ae71f4e674db7bc2755e1e7716e4c530a48484746d8cfa3d156d02a7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.491: INFO: Pod "webserver-deployment-595b5b9587-srfhl" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-srfhl webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-srfhl 00213564-688d-4665-a8d0-bbf3ab71d486 4772116 0 2020-01-28 00:07:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0028a4450 0xc0028a4451}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:07:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:05 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:07:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-01-28 00:07:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 00:08:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://f729c87590fa56d038833b538893c294b166edaeccb4ce21b119da190e486b50,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.492: INFO: Pod "webserver-deployment-595b5b9587-tt4gk" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-tt4gk webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-tt4gk 950008f3-de97-4386-a387-84107e5b05d1 4772111 0 2020-01-28 00:07:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0028a4890 0xc0028a4891}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:07:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:04 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:07:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-01-28 00:07:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 00:08:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c1f30cc4b8b9fba01af08b24713c9aa75b30911b8b7e4ad2fb7c89c262a6fc20,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.492: INFO: Pod "webserver-deployment-595b5b9587-twtwc" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-twtwc webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-twtwc 85a11dae-cae1-4507-a78f-759dfc04c9c4 4772119 0 2020-01-28 00:07:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0028a4cd0 0xc0028a4cd1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:07:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:05 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:07:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-01-28 00:07:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 00:08:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://5d667fae081b7c79389d3e1908edde1ada04f95d8d135e4eaa52a8c6d78d257a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.493: INFO: Pod "webserver-deployment-595b5b9587-vs9vk" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vs9vk webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-vs9vk c8fc8854-dcd9-42f9-8a83-75b0eadc073f 4772258 0 2020-01-28 00:08:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0028a50f0 0xc0028a50f1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
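(The vs9vk dump above shows why that pod is logged as "not available": its only condition is PodScheduled, and HostIP, PodIP, and ContainerStatuses are all still empty. As a minimal sketch of the availability check these dumps imply, not the e2e framework's own helper, the decisive signal is the Ready condition in PodStatus:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether a pod's Ready condition is True. Pods that
// are merely scheduled, like vs9vk above, carry no Ready condition yet.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pending := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{
			{Type: corev1.PodScheduled, Status: corev1.ConditionTrue},
		},
	}}
	fmt.Println(isPodReady(pending)) // false: scheduled, but not yet Ready
}

The deployment controller additionally waits out any minReadySeconds before counting a ready pod as available, but with none set, Ready=true is effectively the whole check.)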
Jan 28 00:08:21.493: INFO: Pod "webserver-deployment-595b5b9587-w5xs6" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-w5xs6 webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-w5xs6 77b91c9a-f00e-40de-b439-a977d62132f8 4772092 0 2020-01-28 00:07:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0028a5267 0xc0028a5268}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:07:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-28 00:08:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:07:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-28 00:07:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 00:08:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://93e2b230c3bbdbd55a24c841e35f81e483864fda355e9b51222bf4ea1e3f07e8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.494: INFO: Pod "webserver-deployment-595b5b9587-wzqdq" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wzqdq webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-wzqdq d6202f50-4ab6-4c33-a4bc-bbeb6fe79ced 4772297 0 2020-01-28 00:08:11 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0028a5780 0xc0028a5781}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-28 00:08:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.494: INFO: Pod "webserver-deployment-595b5b9587-zk9md" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zk9md webserver-deployment-595b5b9587- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-595b5b9587-zk9md 16377b33-98ed-4826-879a-80e4d1ed8d5b 4772106 0 2020-01-28 00:07:42 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e8205201-ed3b-468b-a927-1b40d2fa2324 0xc0028a5bd7 0xc0028a5bd8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:07:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:04 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:07:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-01-28 00:07:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 00:08:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://41e3809ddefb822a6ace8d20be584c4a880723016dd5b21107e28f97de427c17,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.494: INFO: Pod "webserver-deployment-c7997dcc8-46dht" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-46dht webserver-deployment-c7997dcc8- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-c7997dcc8-46dht 2127e1ac-92ee-4ac0-a607-8490fa2f1437 4772291 0 2020-01-28 00:08:11 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a89b48e-df51-480c-b3da-faed64179216 0xc003bc2050 0xc003bc2051}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-28 00:08:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
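(Every pod of the c7997dcc8 ReplicaSet above specifies Image:webserver:404, a tag that seemingly never becomes pullable in this test, so each container sits in Waiting (Reason:ContainerCreating here) and Ready never flips to True. A minimal client-go sketch, assuming the /root/.kube/config used by this run and the test namespace deployment-2145, that lists these pods by their pod-template-hash label and prints each waiting reason:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Select only the pods of the failing ReplicaSet.
	pods, err := client.CoreV1().Pods("deployment-2145").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "pod-template-hash=c7997dcc8"})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if cs.State.Waiting != nil {
				// e.g. "webserver-deployment-c7997dcc8-46dht httpd ContainerCreating"
				fmt.Println(pod.Name, cs.Name, cs.State.Waiting.Reason)
			}
		}
	}
}

Because the new ReplicaSet can never progress, the deployment stays split between the old available pods and these pending ones, which is the state the dumps that follow keep reporting.)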
Jan 28 00:08:21.495: INFO: Pod "webserver-deployment-c7997dcc8-4w2ln" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4w2ln webserver-deployment-c7997dcc8- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-c7997dcc8-4w2ln 8d03cc39-49a5-4137-8bc1-ba9c66bc1a12 4772161 0 2020-01-28 00:08:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a89b48e-df51-480c-b3da-faed64179216 0xc003bc2287 0xc003bc2288}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-28 00:08:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.495: INFO: Pod "webserver-deployment-c7997dcc8-887pf" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-887pf webserver-deployment-c7997dcc8- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-c7997dcc8-887pf e883863d-4a20-42ee-9c27-c6f476b7ab3e 4772256 0 2020-01-28 00:08:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a89b48e-df51-480c-b3da-faed64179216 0xc003bc24b7 0xc003bc24b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.495: INFO: Pod "webserver-deployment-c7997dcc8-bczsg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bczsg webserver-deployment-c7997dcc8- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-c7997dcc8-bczsg cf823fa4-56be-4172-a913-4ca3c57290d9 4772252 0 2020-01-28 00:08:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a89b48e-df51-480c-b3da-faed64179216 0xc003bc26c7 0xc003bc26c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.496: INFO: Pod "webserver-deployment-c7997dcc8-dc59r" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dc59r webserver-deployment-c7997dcc8- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-c7997dcc8-dc59r 192054c2-b964-4862-be08-ceeb175dc29e 4772254 0 2020-01-28 00:08:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a89b48e-df51-480c-b3da-faed64179216 0xc003bc28e7 0xc003bc28e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.496: INFO: Pod "webserver-deployment-c7997dcc8-dzfxq" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dzfxq webserver-deployment-c7997dcc8- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-c7997dcc8-dzfxq 201dd300-c2d7-4297-8a93-e1e24fdf98c1 4772231 0 2020-01-28 00:08:11 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a89b48e-df51-480c-b3da-faed64179216 0xc003bc2aa7 0xc003bc2aa8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
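(Several dumps above (887pf, bczsg, dc59r, dzfxq) show a third state: the scheduler has bound the pod, so Spec NodeName is set and PodScheduled is True, but the kubelet has not reported anything yet, leaving StartTime, HostIP, and ContainerStatuses empty. A hypothetical one-line predicate for that state, assuming the k8s.io/api types:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isBoundButNotStarted matches pods such as dzfxq above: bound to a node
// by the scheduler, but with no kubelet-reported status (StartTime nil).
func isBoundButNotStarted(pod *corev1.Pod) bool {
	return pod.Spec.NodeName != "" && pod.Status.StartTime == nil
}

func main() {
	pod := &corev1.Pod{Spec: corev1.PodSpec{NodeName: "jerma-server-mvvl6gufaqub"}}
	fmt.Println(isBoundButNotStarted(pod)) // true
})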
Jan 28 00:08:21.496: INFO: Pod "webserver-deployment-c7997dcc8-jf2p9" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jf2p9 webserver-deployment-c7997dcc8- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-c7997dcc8-jf2p9 8b7b8ae3-03fe-4b0e-a2e9-cdebc890f88e 4772198 0 2020-01-28 00:08:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a89b48e-df51-480c-b3da-faed64179216 0xc003bc2cf7 0xc003bc2cf8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-28 00:08:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.497: INFO: Pod "webserver-deployment-c7997dcc8-mbcvg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mbcvg webserver-deployment-c7997dcc8- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-c7997dcc8-mbcvg 8e4e4d45-70db-432f-a0e0-75e15d1bd003 4772265 0 2020-01-28 00:08:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a89b48e-df51-480c-b3da-faed64179216 0xc003bc3087 0xc003bc3088}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.498: INFO: Pod "webserver-deployment-c7997dcc8-pdn62" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pdn62 webserver-deployment-c7997dcc8- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-c7997dcc8-pdn62 86d85353-a6b7-40ac-b7d3-bec08264940e 4772179 0 2020-01-28 00:08:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a89b48e-df51-480c-b3da-faed64179216 0xc003bc3277 0xc003bc3278}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-28 00:08:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.498: INFO: Pod "webserver-deployment-c7997dcc8-rzz8n" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rzz8n webserver-deployment-c7997dcc8- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-c7997dcc8-rzz8n 3047bd9f-cdd5-4a0d-83b6-1ceea4d749b1 4772280 0 2020-01-28 00:08:10 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a89b48e-df51-480c-b3da-faed64179216 0xc003bc3577 0xc003bc3578}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-28 00:08:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.499: INFO: Pod "webserver-deployment-c7997dcc8-tfvmx" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tfvmx webserver-deployment-c7997dcc8- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-c7997dcc8-tfvmx 8d1a9cd1-574e-4ad5-b99c-4c2ffaa5676b 4772170 0 2020-01-28 00:08:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a89b48e-df51-480c-b3da-faed64179216 0xc003bc3877 0xc003bc3878}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-28 00:08:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.499: INFO: Pod "webserver-deployment-c7997dcc8-vw4fc" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vw4fc webserver-deployment-c7997dcc8- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-c7997dcc8-vw4fc e9215e95-7a67-4ad7-8875-617f8b443c9a 4772193 0 2020-01-28 00:08:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a89b48e-df51-480c-b3da-faed64179216 0xc003bc3ab7 0xc003bc3ab8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-28 00:08:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:08:21.500: INFO: Pod "webserver-deployment-c7997dcc8-wftfg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wftfg webserver-deployment-c7997dcc8- deployment-2145 /api/v1/namespaces/deployment-2145/pods/webserver-deployment-c7997dcc8-wftfg b1ee7f93-e416-4a39-aaa6-45f73ce4c6c6 4772253 0 2020-01-28 00:08:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7a89b48e-df51-480c-b3da-faed64179216 0xc003bc3d17 0xc003bc3d18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xthrc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xthrc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xthrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:08:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:08:21.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2145" for this suite.

• [SLOW TEST:41.563 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":280,"completed":67,"skipped":1171,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:08:24.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-677k
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 00:08:29.558: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-677k" in namespace "subpath-9409" to be "success or failure"
Jan 28 00:08:29.829: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 270.914605ms
Jan 28 00:08:32.100: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 2.542115669s
Jan 28 00:08:34.265: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 4.707469797s
Jan 28 00:08:39.074: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 9.515689272s
Jan 28 00:08:41.964: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 12.406327225s
Jan 28 00:08:44.449: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 14.891423545s
Jan 28 00:08:46.852: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 17.29447661s
Jan 28 00:08:50.289: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 20.731168029s
Jan 28 00:08:54.139: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 24.580677068s
Jan 28 00:08:56.653: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 27.094954026s
Jan 28 00:08:59.223: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 29.665482321s
Jan 28 00:09:01.407: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 31.849096983s
Jan 28 00:09:03.664: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 34.10634303s
Jan 28 00:09:06.092: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 36.534378623s
Jan 28 00:09:08.101: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 38.542831225s
Jan 28 00:09:10.238: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 40.679954733s
Jan 28 00:09:12.295: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 42.737507164s
Jan 28 00:09:14.434: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 44.876593453s
Jan 28 00:09:16.456: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 46.897626058s
Jan 28 00:09:18.465: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 48.906992193s
Jan 28 00:09:21.725: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Pending", Reason="", readiness=false. Elapsed: 52.16736658s
Jan 28 00:09:23.733: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Running", Reason="", readiness=true. Elapsed: 54.175025869s
Jan 28 00:09:25.740: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Running", Reason="", readiness=true. Elapsed: 56.181939711s
Jan 28 00:09:27.744: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Running", Reason="", readiness=true. Elapsed: 58.186506761s
Jan 28 00:09:29.756: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Running", Reason="", readiness=true. Elapsed: 1m0.198100562s
Jan 28 00:09:31.763: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Running", Reason="", readiness=true. Elapsed: 1m2.204890126s
Jan 28 00:09:33.778: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Running", Reason="", readiness=true. Elapsed: 1m4.219632039s
Jan 28 00:09:35.783: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Running", Reason="", readiness=true. Elapsed: 1m6.225516262s
Jan 28 00:09:37.788: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Running", Reason="", readiness=true. Elapsed: 1m8.229845585s
Jan 28 00:09:39.794: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Running", Reason="", readiness=true. Elapsed: 1m10.235664618s
Jan 28 00:09:41.802: INFO: Pod "pod-subpath-test-configmap-677k": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m12.244039671s
STEP: Saw pod success
Jan 28 00:09:41.802: INFO: Pod "pod-subpath-test-configmap-677k" satisfied condition "success or failure"
Jan 28 00:09:41.810: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-677k container test-container-subpath-configmap-677k: 
STEP: delete the pod
Jan 28 00:09:41.919: INFO: Waiting for pod pod-subpath-test-configmap-677k to disappear
Jan 28 00:09:41.924: INFO: Pod pod-subpath-test-configmap-677k no longer exists
STEP: Deleting pod pod-subpath-test-configmap-677k
Jan 28 00:09:41.924: INFO: Deleting pod "pod-subpath-test-configmap-677k" in namespace "subpath-9409"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:09:41.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9409" for this suite.

• [SLOW TEST:77.878 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":280,"completed":68,"skipped":1178,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:09:41.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 28 00:09:42.004: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:09:51.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2895" for this suite.

• [SLOW TEST:10.015 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":280,"completed":69,"skipped":1213,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:09:51.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:10:27.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8718" for this suite.
STEP: Destroying namespace "nsdeletetest-7519" for this suite.
Jan 28 00:10:27.479: INFO: Namespace nsdeletetest-7519 was already deleted
STEP: Destroying namespace "nsdeletetest-3598" for this suite.

• [SLOW TEST:35.528 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":280,"completed":70,"skipped":1229,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:10:27.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 28 00:10:27.595: INFO: Waiting up to 5m0s for pod "pod-582e563f-a6eb-4fb4-b41e-2b7f7f6c6672" in namespace "emptydir-725" to be "success or failure"
Jan 28 00:10:27.602: INFO: Pod "pod-582e563f-a6eb-4fb4-b41e-2b7f7f6c6672": Phase="Pending", Reason="", readiness=false. Elapsed: 6.868523ms
Jan 28 00:10:29.777: INFO: Pod "pod-582e563f-a6eb-4fb4-b41e-2b7f7f6c6672": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181861756s
Jan 28 00:10:31.786: INFO: Pod "pod-582e563f-a6eb-4fb4-b41e-2b7f7f6c6672": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190883989s
Jan 28 00:10:33.797: INFO: Pod "pod-582e563f-a6eb-4fb4-b41e-2b7f7f6c6672": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201273507s
Jan 28 00:10:35.813: INFO: Pod "pod-582e563f-a6eb-4fb4-b41e-2b7f7f6c6672": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217166513s
Jan 28 00:10:37.819: INFO: Pod "pod-582e563f-a6eb-4fb4-b41e-2b7f7f6c6672": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.223283098s
STEP: Saw pod success
Jan 28 00:10:37.819: INFO: Pod "pod-582e563f-a6eb-4fb4-b41e-2b7f7f6c6672" satisfied condition "success or failure"
Jan 28 00:10:37.823: INFO: Trying to get logs from node jerma-node pod pod-582e563f-a6eb-4fb4-b41e-2b7f7f6c6672 container test-container: 
STEP: delete the pod
Jan 28 00:10:37.858: INFO: Waiting for pod pod-582e563f-a6eb-4fb4-b41e-2b7f7f6c6672 to disappear
Jan 28 00:10:37.877: INFO: Pod pod-582e563f-a6eb-4fb4-b41e-2b7f7f6c6672 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:10:37.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-725" for this suite.

• [SLOW TEST:10.424 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":71,"skipped":1229,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:10:37.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 28 00:10:37.969: INFO: Waiting up to 5m0s for pod "pod-cbf4f5e8-f1ea-4fd6-b5b6-bcc2f6022041" in namespace "emptydir-1192" to be "success or failure"
Jan 28 00:10:37.975: INFO: Pod "pod-cbf4f5e8-f1ea-4fd6-b5b6-bcc2f6022041": Phase="Pending", Reason="", readiness=false. Elapsed: 5.823336ms
Jan 28 00:10:39.982: INFO: Pod "pod-cbf4f5e8-f1ea-4fd6-b5b6-bcc2f6022041": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012726189s
Jan 28 00:10:41.988: INFO: Pod "pod-cbf4f5e8-f1ea-4fd6-b5b6-bcc2f6022041": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01878841s
Jan 28 00:10:43.998: INFO: Pod "pod-cbf4f5e8-f1ea-4fd6-b5b6-bcc2f6022041": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028999736s
STEP: Saw pod success
Jan 28 00:10:43.998: INFO: Pod "pod-cbf4f5e8-f1ea-4fd6-b5b6-bcc2f6022041" satisfied condition "success or failure"
Jan 28 00:10:44.002: INFO: Trying to get logs from node jerma-node pod pod-cbf4f5e8-f1ea-4fd6-b5b6-bcc2f6022041 container test-container: 
STEP: delete the pod
Jan 28 00:10:44.060: INFO: Waiting for pod pod-cbf4f5e8-f1ea-4fd6-b5b6-bcc2f6022041 to disappear
Jan 28 00:10:44.100: INFO: Pod pod-cbf4f5e8-f1ea-4fd6-b5b6-bcc2f6022041 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:10:44.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1192" for this suite.

• [SLOW TEST:6.208 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":72,"skipped":1229,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:10:44.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should find a service from listing all namespaces [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching services
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:10:44.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2529" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":280,"completed":73,"skipped":1235,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:10:44.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 28 00:10:44.395: INFO: Waiting up to 5m0s for pod "pod-09587add-ff35-4d17-a1dc-fe248245a0ca" in namespace "emptydir-8044" to be "success or failure"
Jan 28 00:10:44.456: INFO: Pod "pod-09587add-ff35-4d17-a1dc-fe248245a0ca": Phase="Pending", Reason="", readiness=false. Elapsed: 60.843739ms
Jan 28 00:10:46.465: INFO: Pod "pod-09587add-ff35-4d17-a1dc-fe248245a0ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068885782s
Jan 28 00:10:48.472: INFO: Pod "pod-09587add-ff35-4d17-a1dc-fe248245a0ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076245571s
Jan 28 00:10:50.480: INFO: Pod "pod-09587add-ff35-4d17-a1dc-fe248245a0ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084086142s
Jan 28 00:10:52.516: INFO: Pod "pod-09587add-ff35-4d17-a1dc-fe248245a0ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.120738028s
STEP: Saw pod success
Jan 28 00:10:52.517: INFO: Pod "pod-09587add-ff35-4d17-a1dc-fe248245a0ca" satisfied condition "success or failure"
Jan 28 00:10:52.523: INFO: Trying to get logs from node jerma-node pod pod-09587add-ff35-4d17-a1dc-fe248245a0ca container test-container: 
STEP: delete the pod
Jan 28 00:10:52.666: INFO: Waiting for pod pod-09587add-ff35-4d17-a1dc-fe248245a0ca to disappear
Jan 28 00:10:52.671: INFO: Pod pod-09587add-ff35-4d17-a1dc-fe248245a0ca no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:10:52.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8044" for this suite.

• [SLOW TEST:8.447 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":74,"skipped":1241,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:10:52.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-3161, will wait for the garbage collector to delete the pods
Jan 28 00:11:02.900: INFO: Deleting Job.batch foo took: 9.218434ms
Jan 28 00:11:03.201: INFO: Terminating Job.batch foo pods took: 300.45487ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:11:42.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3161" for this suite.

• [SLOW TEST:49.784 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":280,"completed":75,"skipped":1260,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:11:42.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9736
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Jan 28 00:11:42.642: INFO: Found 0 stateful pods, waiting for 3
Jan 28 00:11:52.652: INFO: Found 2 stateful pods, waiting for 3
Jan 28 00:12:02.649: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 00:12:02.649: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 00:12:02.649: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 28 00:12:12.650: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 00:12:12.650: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 00:12:12.650: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 28 00:12:12.682: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 28 00:12:22.748: INFO: Updating stateful set ss2
Jan 28 00:12:22.780: INFO: Waiting for Pod statefulset-9736/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jan 28 00:12:33.364: INFO: Found 2 stateful pods, waiting for 3
Jan 28 00:12:43.374: INFO: Found 2 stateful pods, waiting for 3
Jan 28 00:12:53.371: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 00:12:53.371: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 00:12:53.371: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 28 00:12:53.415: INFO: Updating stateful set ss2
Jan 28 00:12:53.477: INFO: Waiting for Pod statefulset-9736/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 28 00:13:03.548: INFO: Updating stateful set ss2
Jan 28 00:13:03.608: INFO: Waiting for StatefulSet statefulset-9736/ss2 to complete update
Jan 28 00:13:03.608: INFO: Waiting for Pod statefulset-9736/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 28 00:13:13.626: INFO: Waiting for StatefulSet statefulset-9736/ss2 to complete update
Jan 28 00:13:13.626: INFO: Waiting for Pod statefulset-9736/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 28 00:13:23.636: INFO: Waiting for StatefulSet statefulset-9736/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jan 28 00:13:33.622: INFO: Deleting all statefulset in ns statefulset-9736
Jan 28 00:13:33.626: INFO: Scaling statefulset ss2 to 0
Jan 28 00:14:03.687: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 00:14:03.691: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:14:03.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9736" for this suite.

• [SLOW TEST:141.284 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":280,"completed":76,"skipped":1275,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:14:03.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:14:12.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6994" for this suite.

• [SLOW TEST:8.404 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":77,"skipped":1300,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:14:12.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
Jan 28 00:14:12.849: INFO: created pod pod-service-account-defaultsa
Jan 28 00:14:12.850: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 28 00:14:13.043: INFO: created pod pod-service-account-mountsa
Jan 28 00:14:13.043: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 28 00:14:13.057: INFO: created pod pod-service-account-nomountsa
Jan 28 00:14:13.057: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 28 00:14:13.104: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 28 00:14:13.104: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 28 00:14:13.232: INFO: created pod pod-service-account-mountsa-mountspec
Jan 28 00:14:13.233: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 28 00:14:13.260: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 28 00:14:13.260: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 28 00:14:13.295: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 28 00:14:13.295: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 28 00:14:13.307: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 28 00:14:13.307: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 28 00:14:13.471: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 28 00:14:13.471: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:14:13.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-998" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":280,"completed":78,"skipped":1357,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:14:14.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 28 00:14:17.215: INFO: Waiting up to 5m0s for pod "pod-e798b22c-f82b-4312-9106-e84733a6f9d5" in namespace "emptydir-8053" to be "success or failure"
Jan 28 00:14:17.464: INFO: Pod "pod-e798b22c-f82b-4312-9106-e84733a6f9d5": Phase="Pending", Reason="", readiness=false. Elapsed: 249.193946ms
Jan 28 00:14:19.878: INFO: Pod "pod-e798b22c-f82b-4312-9106-e84733a6f9d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.66242077s
Jan 28 00:14:22.007: INFO: Pod "pod-e798b22c-f82b-4312-9106-e84733a6f9d5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.791389788s
Jan 28 00:14:24.123: INFO: Pod "pod-e798b22c-f82b-4312-9106-e84733a6f9d5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.907307849s
Jan 28 00:14:26.522: INFO: Pod "pod-e798b22c-f82b-4312-9106-e84733a6f9d5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.30699391s
Jan 28 00:14:30.261: INFO: Pod "pod-e798b22c-f82b-4312-9106-e84733a6f9d5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.04582192s
Jan 28 00:14:32.752: INFO: Pod "pod-e798b22c-f82b-4312-9106-e84733a6f9d5": Phase="Pending", Reason="", readiness=false. Elapsed: 15.536869771s
Jan 28 00:14:34.852: INFO: Pod "pod-e798b22c-f82b-4312-9106-e84733a6f9d5": Phase="Pending", Reason="", readiness=false. Elapsed: 17.637139214s
Jan 28 00:14:36.859: INFO: Pod "pod-e798b22c-f82b-4312-9106-e84733a6f9d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.644140011s
STEP: Saw pod success
Jan 28 00:14:36.859: INFO: Pod "pod-e798b22c-f82b-4312-9106-e84733a6f9d5" satisfied condition "success or failure"
Jan 28 00:14:36.863: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-e798b22c-f82b-4312-9106-e84733a6f9d5 container test-container: 
STEP: delete the pod
Jan 28 00:14:36.946: INFO: Waiting for pod pod-e798b22c-f82b-4312-9106-e84733a6f9d5 to disappear
Jan 28 00:14:37.114: INFO: Pod pod-e798b22c-f82b-4312-9106-e84733a6f9d5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:14:37.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8053" for this suite.

• [SLOW TEST:22.253 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":79,"skipped":1362,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:14:37.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Jan 28 00:14:38.201: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 28 00:14:38.397: INFO: Waiting for terminating namespaces to be deleted...
Jan 28 00:14:38.402: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 28 00:14:38.418: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 28 00:14:38.419: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 00:14:38.419: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 28 00:14:38.419: INFO: 	Container weave ready: true, restart count 1
Jan 28 00:14:38.419: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 00:14:38.419: INFO: busybox-readonly-fs60068b33-24ab-4615-bfb7-90a2f5d79700 from kubelet-test-6994 started at 2020-01-28 00:14:04 +0000 UTC (1 container statuses recorded)
Jan 28 00:14:38.419: INFO: 	Container busybox-readonly-fs60068b33-24ab-4615-bfb7-90a2f5d79700 ready: true, restart count 0
Jan 28 00:14:38.419: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 28 00:14:38.434: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 28 00:14:38.434: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 28 00:14:38.434: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 28 00:14:38.434: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 00:14:38.434: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 28 00:14:38.434: INFO: 	Container weave ready: true, restart count 0
Jan 28 00:14:38.434: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 00:14:38.434: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 28 00:14:38.434: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 28 00:14:38.434: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 28 00:14:38.434: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 28 00:14:38.434: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 28 00:14:38.434: INFO: 	Container etcd ready: true, restart count 1
Jan 28 00:14:38.434: INFO: pod-service-account-mountsa-nomountspec from svcaccounts-998 started at 2020-01-28 00:14:14 +0000 UTC (1 container statuses recorded)
Jan 28 00:14:38.435: INFO: 	Container token-test ready: false, restart count 0
Jan 28 00:14:38.435: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 28 00:14:38.435: INFO: 	Container coredns ready: true, restart count 0
Jan 28 00:14:38.435: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 28 00:14:38.435: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-b2ae9b62-96bf-41ee-8b01-eba37c6eb400 90
STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1, expecting it to be scheduled
STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides, expecting it to be scheduled
STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but using the UDP protocol, on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-b2ae9b62-96bf-41ee-8b01-eba37c6eb400 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-b2ae9b62-96bf-41ee-8b01-eba37c6eb400
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:15:12.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9721" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:35.708 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":280,"completed":80,"skipped":1380,"failed":0}
S
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:15:12.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override arguments
Jan 28 00:15:12.975: INFO: Waiting up to 5m0s for pod "client-containers-5b082bac-f90a-4025-a334-92f239732ac8" in namespace "containers-9955" to be "success or failure"
Jan 28 00:15:12.996: INFO: Pod "client-containers-5b082bac-f90a-4025-a334-92f239732ac8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.973509ms
Jan 28 00:15:15.002: INFO: Pod "client-containers-5b082bac-f90a-4025-a334-92f239732ac8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02644301s
Jan 28 00:15:17.011: INFO: Pod "client-containers-5b082bac-f90a-4025-a334-92f239732ac8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036045664s
Jan 28 00:15:19.015: INFO: Pod "client-containers-5b082bac-f90a-4025-a334-92f239732ac8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039462056s
Jan 28 00:15:21.021: INFO: Pod "client-containers-5b082bac-f90a-4025-a334-92f239732ac8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045281679s
STEP: Saw pod success
Jan 28 00:15:21.021: INFO: Pod "client-containers-5b082bac-f90a-4025-a334-92f239732ac8" satisfied condition "success or failure"
Jan 28 00:15:21.025: INFO: Trying to get logs from node jerma-node pod client-containers-5b082bac-f90a-4025-a334-92f239732ac8 container test-container: 
STEP: delete the pod
Jan 28 00:15:21.155: INFO: Waiting for pod client-containers-5b082bac-f90a-4025-a334-92f239732ac8 to disappear
Jan 28 00:15:21.165: INFO: Pod client-containers-5b082bac-f90a-4025-a334-92f239732ac8 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:15:21.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9955" for this suite.

• [SLOW TEST:8.339 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":280,"completed":81,"skipped":1381,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:15:21.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 28 00:15:21.392: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7058 /api/v1/namespaces/watch-7058/configmaps/e2e-watch-test-resource-version 7c346eef-969b-4315-8c10-02b67abb3770 4774171 0 2020-01-28 00:15:21 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 28 00:15:21.392: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-7058 /api/v1/namespaces/watch-7058/configmaps/e2e-watch-test-resource-version 7c346eef-969b-4315-8c10-02b67abb3770 4774172 0 2020-01-28 00:15:21 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:15:21.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7058" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":280,"completed":82,"skipped":1434,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:15:21.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the initial replication controller
Jan 28 00:15:21.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5029'
Jan 28 00:15:22.213: INFO: stderr: ""
Jan 28 00:15:22.213: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 00:15:22.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5029'
Jan 28 00:15:22.587: INFO: stderr: ""
Jan 28 00:15:22.587: INFO: stdout: "update-demo-nautilus-cnr9k update-demo-nautilus-xs4cc "
Jan 28 00:15:22.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cnr9k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5029'
Jan 28 00:15:22.798: INFO: stderr: ""
Jan 28 00:15:22.798: INFO: stdout: ""
Jan 28 00:15:22.798: INFO: update-demo-nautilus-cnr9k is created but not running
Jan 28 00:15:27.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5029'
Jan 28 00:15:28.336: INFO: stderr: ""
Jan 28 00:15:28.336: INFO: stdout: "update-demo-nautilus-cnr9k update-demo-nautilus-xs4cc "
Jan 28 00:15:28.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cnr9k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5029'
Jan 28 00:15:29.041: INFO: stderr: ""
Jan 28 00:15:29.041: INFO: stdout: ""
Jan 28 00:15:29.041: INFO: update-demo-nautilus-cnr9k is created but not running
Jan 28 00:15:34.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5029'
Jan 28 00:15:34.173: INFO: stderr: ""
Jan 28 00:15:34.173: INFO: stdout: "update-demo-nautilus-cnr9k update-demo-nautilus-xs4cc "
Jan 28 00:15:34.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cnr9k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5029'
Jan 28 00:15:34.312: INFO: stderr: ""
Jan 28 00:15:34.312: INFO: stdout: "true"
Jan 28 00:15:34.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cnr9k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5029'
Jan 28 00:15:34.565: INFO: stderr: ""
Jan 28 00:15:34.565: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 00:15:34.565: INFO: validating pod update-demo-nautilus-cnr9k
Jan 28 00:15:34.589: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 00:15:34.589: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 28 00:15:34.589: INFO: update-demo-nautilus-cnr9k is verified up and running
Jan 28 00:15:34.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xs4cc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5029'
Jan 28 00:15:34.696: INFO: stderr: ""
Jan 28 00:15:34.696: INFO: stdout: "true"
Jan 28 00:15:34.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xs4cc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5029'
Jan 28 00:15:34.835: INFO: stderr: ""
Jan 28 00:15:34.835: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 00:15:34.835: INFO: validating pod update-demo-nautilus-xs4cc
Jan 28 00:15:34.842: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 00:15:34.842: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 28 00:15:34.842: INFO: update-demo-nautilus-xs4cc is verified up and running
STEP: rolling-update to new replication controller
Jan 28 00:15:34.844: INFO: scanned /root for discovery docs: 
Jan 28 00:15:34.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-5029'
Jan 28 00:16:04.788: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 28 00:16:04.788: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 00:16:04.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5029'
Jan 28 00:16:04.940: INFO: stderr: ""
Jan 28 00:16:04.940: INFO: stdout: "update-demo-kitten-h5j5p update-demo-kitten-nkq5f "
Jan 28 00:16:04.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h5j5p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5029'
Jan 28 00:16:05.095: INFO: stderr: ""
Jan 28 00:16:05.095: INFO: stdout: "true"
Jan 28 00:16:05.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h5j5p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5029'
Jan 28 00:16:05.238: INFO: stderr: ""
Jan 28 00:16:05.238: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 28 00:16:05.238: INFO: validating pod update-demo-kitten-h5j5p
Jan 28 00:16:05.244: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 28 00:16:05.244: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 28 00:16:05.244: INFO: update-demo-kitten-h5j5p is verified up and running
Jan 28 00:16:05.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nkq5f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5029'
Jan 28 00:16:05.377: INFO: stderr: ""
Jan 28 00:16:05.377: INFO: stdout: "true"
Jan 28 00:16:05.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-nkq5f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5029'
Jan 28 00:16:05.518: INFO: stderr: ""
Jan 28 00:16:05.518: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan 28 00:16:05.518: INFO: validating pod update-demo-kitten-nkq5f
Jan 28 00:16:05.529: INFO: got data: {
  "image": "kitten.jpg"
}

Jan 28 00:16:05.529: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan 28 00:16:05.529: INFO: update-demo-kitten-nkq5f is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:16:05.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5029" for this suite.

• [SLOW TEST:44.091 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":280,"completed":83,"skipped":1468,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:16:05.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 00:16:06.583: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 00:16:08.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767366, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767366, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767366, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767366, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:16:10.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767366, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767366, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767366, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767366, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:16:13.304: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767366, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767366, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767366, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767366, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 00:16:16.528: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:16:16.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1239-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:16:17.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3503" for this suite.
STEP: Destroying namespace "webhook-3503-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.364 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":280,"completed":84,"skipped":1478,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:16:17.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 28 00:16:28.841: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2405 pod-service-account-9cff6800-758d-4732-8033-a724a0b29943 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 28 00:16:31.335: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2405 pod-service-account-9cff6800-758d-4732-8033-a724a0b29943 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 28 00:16:31.731: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2405 pod-service-account-9cff6800-758d-4732-8033-a724a0b29943 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:16:32.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2405" for this suite.

• [SLOW TEST:14.259 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":280,"completed":85,"skipped":1512,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:16:32.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:16:38.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3270" for this suite.

• [SLOW TEST:6.449 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":280,"completed":86,"skipped":1560,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:16:38.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-ee29579e-ace0-48be-8d98-088bee458543
STEP: Creating a pod to test consume configMaps
Jan 28 00:16:38.723: INFO: Waiting up to 5m0s for pod "pod-configmaps-f1705c7d-dd44-49c8-b071-8ab2bdeae078" in namespace "configmap-3981" to be "success or failure"
Jan 28 00:16:38.730: INFO: Pod "pod-configmaps-f1705c7d-dd44-49c8-b071-8ab2bdeae078": Phase="Pending", Reason="", readiness=false. Elapsed: 7.203847ms
Jan 28 00:16:40.736: INFO: Pod "pod-configmaps-f1705c7d-dd44-49c8-b071-8ab2bdeae078": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012927911s
Jan 28 00:16:42.744: INFO: Pod "pod-configmaps-f1705c7d-dd44-49c8-b071-8ab2bdeae078": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020980063s
Jan 28 00:16:44.752: INFO: Pod "pod-configmaps-f1705c7d-dd44-49c8-b071-8ab2bdeae078": Phase="Pending", Reason="", readiness=false. Elapsed: 6.028600024s
Jan 28 00:16:46.757: INFO: Pod "pod-configmaps-f1705c7d-dd44-49c8-b071-8ab2bdeae078": Phase="Pending", Reason="", readiness=false. Elapsed: 8.034163454s
Jan 28 00:16:48.787: INFO: Pod "pod-configmaps-f1705c7d-dd44-49c8-b071-8ab2bdeae078": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063862711s
STEP: Saw pod success
Jan 28 00:16:48.787: INFO: Pod "pod-configmaps-f1705c7d-dd44-49c8-b071-8ab2bdeae078" satisfied condition "success or failure"
Jan 28 00:16:48.792: INFO: Trying to get logs from node jerma-node pod pod-configmaps-f1705c7d-dd44-49c8-b071-8ab2bdeae078 container configmap-volume-test: 
STEP: delete the pod
Jan 28 00:16:48.826: INFO: Waiting for pod pod-configmaps-f1705c7d-dd44-49c8-b071-8ab2bdeae078 to disappear
Jan 28 00:16:48.835: INFO: Pod pod-configmaps-f1705c7d-dd44-49c8-b071-8ab2bdeae078 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:16:48.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3981" for this suite.

• [SLOW TEST:10.235 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":87,"skipped":1644,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:16:48.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:16:55.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8182" for this suite.

• [SLOW TEST:6.314 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":88,"skipped":1672,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:16:55.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3153
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Jan 28 00:16:55.322: INFO: Found 0 stateful pods, waiting for 3
Jan 28 00:17:05.329: INFO: Found 2 stateful pods, waiting for 3
Jan 28 00:17:15.327: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 00:17:15.327: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 00:17:15.327: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 28 00:17:25.327: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 00:17:25.327: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 00:17:25.327: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 28 00:17:25.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 28 00:17:25.806: INFO: stderr: "I0128 00:17:25.589857    2355 log.go:172] (0xc0000fca50) (0xc00068dc20) Create stream\nI0128 00:17:25.590232    2355 log.go:172] (0xc0000fca50) (0xc00068dc20) Stream added, broadcasting: 1\nI0128 00:17:25.596948    2355 log.go:172] (0xc0000fca50) Reply frame received for 1\nI0128 00:17:25.597104    2355 log.go:172] (0xc0000fca50) (0xc0007860a0) Create stream\nI0128 00:17:25.597126    2355 log.go:172] (0xc0000fca50) (0xc0007860a0) Stream added, broadcasting: 3\nI0128 00:17:25.598941    2355 log.go:172] (0xc0000fca50) Reply frame received for 3\nI0128 00:17:25.598993    2355 log.go:172] (0xc0000fca50) (0xc00068de00) Create stream\nI0128 00:17:25.599002    2355 log.go:172] (0xc0000fca50) (0xc00068de00) Stream added, broadcasting: 5\nI0128 00:17:25.600536    2355 log.go:172] (0xc0000fca50) Reply frame received for 5\nI0128 00:17:25.672118    2355 log.go:172] (0xc0000fca50) Data frame received for 5\nI0128 00:17:25.672214    2355 log.go:172] (0xc00068de00) (5) Data frame handling\nI0128 00:17:25.672236    2355 log.go:172] (0xc00068de00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0128 00:17:25.714208    2355 log.go:172] (0xc0000fca50) Data frame received for 3\nI0128 00:17:25.714252    2355 log.go:172] (0xc0007860a0) (3) Data frame handling\nI0128 00:17:25.714270    2355 log.go:172] (0xc0007860a0) (3) Data frame sent\nI0128 00:17:25.794310    2355 log.go:172] (0xc0000fca50) Data frame received for 1\nI0128 00:17:25.794492    2355 log.go:172] (0xc0000fca50) (0xc0007860a0) Stream removed, broadcasting: 3\nI0128 00:17:25.794647    2355 log.go:172] (0xc00068dc20) (1) Data frame handling\nI0128 00:17:25.794774    2355 log.go:172] (0xc00068dc20) (1) Data frame sent\nI0128 00:17:25.795102    2355 log.go:172] (0xc0000fca50) (0xc00068de00) Stream removed, broadcasting: 5\nI0128 00:17:25.795649    2355 log.go:172] (0xc0000fca50) (0xc00068dc20) Stream removed, broadcasting: 1\nI0128 00:17:25.795702    2355 log.go:172] (0xc0000fca50) Go away received\nI0128 00:17:25.797961    2355 log.go:172] (0xc0000fca50) (0xc00068dc20) Stream removed, broadcasting: 1\nI0128 00:17:25.797993    2355 log.go:172] (0xc0000fca50) (0xc0007860a0) Stream removed, broadcasting: 3\nI0128 00:17:25.798001    2355 log.go:172] (0xc0000fca50) (0xc00068de00) Stream removed, broadcasting: 5\n"
Jan 28 00:17:25.807: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 28 00:17:25.807: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 28 00:17:35.854: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 28 00:17:45.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 00:17:46.192: INFO: stderr: "I0128 00:17:46.042221    2376 log.go:172] (0xc000beadc0) (0xc000bc2460) Create stream\nI0128 00:17:46.042566    2376 log.go:172] (0xc000beadc0) (0xc000bc2460) Stream added, broadcasting: 1\nI0128 00:17:46.046192    2376 log.go:172] (0xc000beadc0) Reply frame received for 1\nI0128 00:17:46.046302    2376 log.go:172] (0xc000beadc0) (0xc00065e780) Create stream\nI0128 00:17:46.046322    2376 log.go:172] (0xc000beadc0) (0xc00065e780) Stream added, broadcasting: 3\nI0128 00:17:46.047763    2376 log.go:172] (0xc000beadc0) Reply frame received for 3\nI0128 00:17:46.047824    2376 log.go:172] (0xc000beadc0) (0xc0006b1b80) Create stream\nI0128 00:17:46.047860    2376 log.go:172] (0xc000beadc0) (0xc0006b1b80) Stream added, broadcasting: 5\nI0128 00:17:46.049312    2376 log.go:172] (0xc000beadc0) Reply frame received for 5\nI0128 00:17:46.115641    2376 log.go:172] (0xc000beadc0) Data frame received for 5\nI0128 00:17:46.115722    2376 log.go:172] (0xc0006b1b80) (5) Data frame handling\nI0128 00:17:46.115733    2376 log.go:172] (0xc0006b1b80) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0128 00:17:46.115759    2376 log.go:172] (0xc000beadc0) Data frame received for 3\nI0128 00:17:46.115765    2376 log.go:172] (0xc00065e780) (3) Data frame handling\nI0128 00:17:46.115771    2376 log.go:172] (0xc00065e780) (3) Data frame sent\nI0128 00:17:46.181979    2376 log.go:172] (0xc000beadc0) (0xc00065e780) Stream removed, broadcasting: 3\nI0128 00:17:46.182132    2376 log.go:172] (0xc000beadc0) Data frame received for 1\nI0128 00:17:46.182146    2376 log.go:172] (0xc000bc2460) (1) Data frame handling\nI0128 00:17:46.182165    2376 log.go:172] (0xc000bc2460) (1) Data frame sent\nI0128 00:17:46.182200    2376 log.go:172] (0xc000beadc0) (0xc000bc2460) Stream removed, broadcasting: 1\nI0128 00:17:46.182372    2376 log.go:172] (0xc000beadc0) (0xc0006b1b80) Stream removed, broadcasting: 5\nI0128 00:17:46.182411    2376 log.go:172] (0xc000beadc0) Go away received\nI0128 00:17:46.182948    2376 log.go:172] (0xc000beadc0) (0xc000bc2460) Stream removed, broadcasting: 1\nI0128 00:17:46.182990    2376 log.go:172] (0xc000beadc0) (0xc00065e780) Stream removed, broadcasting: 3\nI0128 00:17:46.183016    2376 log.go:172] (0xc000beadc0) (0xc0006b1b80) Stream removed, broadcasting: 5\n"
Jan 28 00:17:46.192: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 28 00:17:46.192: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 28 00:17:56.220: INFO: Waiting for StatefulSet statefulset-3153/ss2 to complete update
Jan 28 00:17:56.220: INFO: Waiting for Pod statefulset-3153/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 28 00:17:56.220: INFO: Waiting for Pod statefulset-3153/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 28 00:17:56.220: INFO: Waiting for Pod statefulset-3153/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 28 00:18:06.232: INFO: Waiting for StatefulSet statefulset-3153/ss2 to complete update
Jan 28 00:18:06.232: INFO: Waiting for Pod statefulset-3153/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 28 00:18:06.232: INFO: Waiting for Pod statefulset-3153/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 28 00:18:16.230: INFO: Waiting for StatefulSet statefulset-3153/ss2 to complete update
Jan 28 00:18:16.230: INFO: Waiting for Pod statefulset-3153/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 28 00:18:16.230: INFO: Waiting for Pod statefulset-3153/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 28 00:18:26.228: INFO: Waiting for StatefulSet statefulset-3153/ss2 to complete update
Jan 28 00:18:26.228: INFO: Waiting for Pod statefulset-3153/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 28 00:18:36.232: INFO: Waiting for StatefulSet statefulset-3153/ss2 to complete update
Jan 28 00:18:36.232: INFO: Waiting for Pod statefulset-3153/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 28 00:18:46.233: INFO: Waiting for StatefulSet statefulset-3153/ss2 to complete update
STEP: Rolling back to a previous revision
Jan 28 00:18:56.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 28 00:18:56.807: INFO: stderr: "I0128 00:18:56.444197    2396 log.go:172] (0xc0009dedc0) (0xc000996140) Create stream\nI0128 00:18:56.444495    2396 log.go:172] (0xc0009dedc0) (0xc000996140) Stream added, broadcasting: 1\nI0128 00:18:56.453681    2396 log.go:172] (0xc0009dedc0) Reply frame received for 1\nI0128 00:18:56.453801    2396 log.go:172] (0xc0009dedc0) (0xc0009bc0a0) Create stream\nI0128 00:18:56.453837    2396 log.go:172] (0xc0009dedc0) (0xc0009bc0a0) Stream added, broadcasting: 3\nI0128 00:18:56.456296    2396 log.go:172] (0xc0009dedc0) Reply frame received for 3\nI0128 00:18:56.456334    2396 log.go:172] (0xc0009dedc0) (0xc0009961e0) Create stream\nI0128 00:18:56.456353    2396 log.go:172] (0xc0009dedc0) (0xc0009961e0) Stream added, broadcasting: 5\nI0128 00:18:56.464219    2396 log.go:172] (0xc0009dedc0) Reply frame received for 5\nI0128 00:18:56.607018    2396 log.go:172] (0xc0009dedc0) Data frame received for 5\nI0128 00:18:56.607316    2396 log.go:172] (0xc0009961e0) (5) Data frame handling\nI0128 00:18:56.607365    2396 log.go:172] (0xc0009961e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0128 00:18:56.709748    2396 log.go:172] (0xc0009dedc0) Data frame received for 3\nI0128 00:18:56.709845    2396 log.go:172] (0xc0009bc0a0) (3) Data frame handling\nI0128 00:18:56.709874    2396 log.go:172] (0xc0009bc0a0) (3) Data frame sent\nI0128 00:18:56.791440    2396 log.go:172] (0xc0009dedc0) Data frame received for 1\nI0128 00:18:56.791603    2396 log.go:172] (0xc000996140) (1) Data frame handling\nI0128 00:18:56.791653    2396 log.go:172] (0xc000996140) (1) Data frame sent\nI0128 00:18:56.791962    2396 log.go:172] (0xc0009dedc0) (0xc000996140) Stream removed, broadcasting: 1\nI0128 00:18:56.792381    2396 log.go:172] (0xc0009dedc0) (0xc0009bc0a0) Stream removed, broadcasting: 3\nI0128 00:18:56.792737    2396 log.go:172] (0xc0009dedc0) (0xc0009961e0) Stream removed, broadcasting: 5\nI0128 00:18:56.792872    2396 log.go:172] (0xc0009dedc0) (0xc000996140) Stream removed, broadcasting: 1\nI0128 00:18:56.792918    2396 log.go:172] (0xc0009dedc0) (0xc0009bc0a0) Stream removed, broadcasting: 3\nI0128 00:18:56.792943    2396 log.go:172] (0xc0009dedc0) (0xc0009961e0) Stream removed, broadcasting: 5\nI0128 00:18:56.793202    2396 log.go:172] (0xc0009dedc0) Go away received\n"
Jan 28 00:18:56.807: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 28 00:18:56.807: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 28 00:19:06.889: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 28 00:19:16.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3153 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 28 00:19:17.360: INFO: stderr: "I0128 00:19:17.190713    2413 log.go:172] (0xc000b67290) (0xc000ad0820) Create stream\nI0128 00:19:17.190937    2413 log.go:172] (0xc000b67290) (0xc000ad0820) Stream added, broadcasting: 1\nI0128 00:19:17.197839    2413 log.go:172] (0xc000b67290) Reply frame received for 1\nI0128 00:19:17.197884    2413 log.go:172] (0xc000b67290) (0xc000727cc0) Create stream\nI0128 00:19:17.197893    2413 log.go:172] (0xc000b67290) (0xc000727cc0) Stream added, broadcasting: 3\nI0128 00:19:17.198856    2413 log.go:172] (0xc000b67290) Reply frame received for 3\nI0128 00:19:17.198909    2413 log.go:172] (0xc000b67290) (0xc0006d08c0) Create stream\nI0128 00:19:17.198934    2413 log.go:172] (0xc000b67290) (0xc0006d08c0) Stream added, broadcasting: 5\nI0128 00:19:17.201086    2413 log.go:172] (0xc000b67290) Reply frame received for 5\nI0128 00:19:17.269400    2413 log.go:172] (0xc000b67290) Data frame received for 3\nI0128 00:19:17.269682    2413 log.go:172] (0xc000727cc0) (3) Data frame handling\nI0128 00:19:17.269737    2413 log.go:172] (0xc000727cc0) (3) Data frame sent\nI0128 00:19:17.269863    2413 log.go:172] (0xc000b67290) Data frame received for 5\nI0128 00:19:17.269925    2413 log.go:172] (0xc0006d08c0) (5) Data frame handling\nI0128 00:19:17.269977    2413 log.go:172] (0xc0006d08c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0128 00:19:17.351761    2413 log.go:172] (0xc000b67290) (0xc000727cc0) Stream removed, broadcasting: 3\nI0128 00:19:17.351973    2413 log.go:172] (0xc000b67290) Data frame received for 1\nI0128 00:19:17.352015    2413 log.go:172] (0xc000ad0820) (1) Data frame handling\nI0128 00:19:17.352040    2413 log.go:172] (0xc000ad0820) (1) Data frame sent\nI0128 00:19:17.352076    2413 log.go:172] (0xc000b67290) (0xc0006d08c0) Stream removed, broadcasting: 5\nI0128 00:19:17.352155    2413 log.go:172] (0xc000b67290) (0xc000ad0820) Stream removed, broadcasting: 1\nI0128 00:19:17.352198    2413 log.go:172] (0xc000b67290) Go away received\nI0128 00:19:17.353228    2413 log.go:172] (0xc000b67290) (0xc000ad0820) Stream removed, broadcasting: 1\nI0128 00:19:17.353258    2413 log.go:172] (0xc000b67290) (0xc000727cc0) Stream removed, broadcasting: 3\nI0128 00:19:17.353268    2413 log.go:172] (0xc000b67290) (0xc0006d08c0) Stream removed, broadcasting: 5\n"
Jan 28 00:19:17.361: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 28 00:19:17.361: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 28 00:19:27.388: INFO: Waiting for StatefulSet statefulset-3153/ss2 to complete update
Jan 28 00:19:27.389: INFO: Waiting for Pod statefulset-3153/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 28 00:19:27.389: INFO: Waiting for Pod statefulset-3153/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 28 00:19:37.403: INFO: Waiting for StatefulSet statefulset-3153/ss2 to complete update
Jan 28 00:19:37.403: INFO: Waiting for Pod statefulset-3153/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 28 00:19:37.403: INFO: Waiting for Pod statefulset-3153/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 28 00:19:47.680: INFO: Waiting for StatefulSet statefulset-3153/ss2 to complete update
Jan 28 00:19:47.680: INFO: Waiting for Pod statefulset-3153/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 28 00:19:57.399: INFO: Waiting for StatefulSet statefulset-3153/ss2 to complete update
Jan 28 00:19:57.399: INFO: Waiting for Pod statefulset-3153/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 28 00:20:07.400: INFO: Waiting for StatefulSet statefulset-3153/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jan 28 00:20:17.397: INFO: Deleting all statefulset in ns statefulset-3153
Jan 28 00:20:17.399: INFO: Scaling statefulset ss2 to 0
Jan 28 00:20:57.431: INFO: Waiting for statefulset status.replicas updated to 0
Jan 28 00:20:57.436: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:20:57.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3153" for this suite.

• [SLOW TEST:242.357 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":280,"completed":89,"skipped":1684,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:20:57.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1906.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1906.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1906.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1906.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1906.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1906.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 00:21:09.980: INFO: DNS probes using dns-1906/dns-test-91d1274d-84d3-49c1-b000-282eda3cbd08 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:21:10.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1906" for this suite.

• [SLOW TEST:12.657 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":280,"completed":90,"skipped":1685,"failed":0}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:21:10.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 28 00:21:10.299: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:21:24.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8823" for this suite.

• [SLOW TEST:14.140 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":280,"completed":91,"skipped":1686,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:21:24.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-0b9bc553-1093-4d2c-bebf-e584fc2f3369
STEP: Creating a pod to test consume configMaps
Jan 28 00:21:24.426: INFO: Waiting up to 5m0s for pod "pod-configmaps-33d27868-2f5e-4bce-9abc-9ae16dfef89b" in namespace "configmap-5802" to be "success or failure"
Jan 28 00:21:24.451: INFO: Pod "pod-configmaps-33d27868-2f5e-4bce-9abc-9ae16dfef89b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.418227ms
Jan 28 00:21:26.463: INFO: Pod "pod-configmaps-33d27868-2f5e-4bce-9abc-9ae16dfef89b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036507672s
Jan 28 00:21:28.471: INFO: Pod "pod-configmaps-33d27868-2f5e-4bce-9abc-9ae16dfef89b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045325427s
Jan 28 00:21:30.479: INFO: Pod "pod-configmaps-33d27868-2f5e-4bce-9abc-9ae16dfef89b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053131875s
Jan 28 00:21:32.486: INFO: Pod "pod-configmaps-33d27868-2f5e-4bce-9abc-9ae16dfef89b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05979978s
STEP: Saw pod success
Jan 28 00:21:32.486: INFO: Pod "pod-configmaps-33d27868-2f5e-4bce-9abc-9ae16dfef89b" satisfied condition "success or failure"
Jan 28 00:21:32.489: INFO: Trying to get logs from node jerma-node pod pod-configmaps-33d27868-2f5e-4bce-9abc-9ae16dfef89b container configmap-volume-test: 
STEP: delete the pod
Jan 28 00:21:32.672: INFO: Waiting for pod pod-configmaps-33d27868-2f5e-4bce-9abc-9ae16dfef89b to disappear
Jan 28 00:21:32.677: INFO: Pod pod-configmaps-33d27868-2f5e-4bce-9abc-9ae16dfef89b no longer exists
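What the test just exercised can be replayed with a ConfigMap mounted as a volume. A minimal sketch under assumed names, key, mount path, and image:

  kubectl create configmap cm-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ['sh', '-c', 'cat /etc/cm/data-1']
      volumeMounts:
      - name: cm
        mountPath: /etc/cm
    volumes:
    - name: cm
      configMap:
        name: cm-demo
  EOF
  # once the pod phase is Succeeded (the "success or failure" condition above):
  kubectl logs cm-volume-demo    # expected output: value-1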
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:21:32.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5802" for this suite.

• [SLOW TEST:8.365 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":92,"skipped":1700,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:21:32.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 00:21:33.575: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 00:21:35.590: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767693, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767693, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767693, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767693, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:21:37.599: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767693, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767693, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767693, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767693, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:21:39.635: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767693, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767693, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767693, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767693, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:21:41.605: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767693, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767693, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767693, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767693, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 00:21:44.748: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
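The rule toggling above maps onto kubectl patch against the ValidatingWebhookConfiguration; JSON-patching the operations list is one way to do it (the configuration name and the webhook/rule indexes below are assumed):

  # remove CREATE so the webhook stops intercepting creates
  kubectl patch validatingwebhookconfiguration demo-webhook-config --type=json \
    -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'

  # patch CREATE back in, as the test does before its final denied create
  kubectl patch validatingwebhookconfiguration demo-webhook-config --type=json \
    -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'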
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:21:44.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3664" for this suite.
STEP: Destroying namespace "webhook-3664-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.374 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":280,"completed":93,"skipped":1724,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:21:45.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-5872
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-5872
I0128 00:21:45.218462       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-5872, replica count: 2
I0128 00:21:48.269339       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:21:51.270000       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:21:54.270792       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:21:57.271282       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 28 00:21:57.271: INFO: Creating new exec pod
Jan 28 00:22:04.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5872 execpod645dc -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 28 00:22:04.783: INFO: stderr: "I0128 00:22:04.531596    2433 log.go:172] (0xc0003c8f20) (0xc000636000) Create stream\nI0128 00:22:04.532276    2433 log.go:172] (0xc0003c8f20) (0xc000636000) Stream added, broadcasting: 1\nI0128 00:22:04.540265    2433 log.go:172] (0xc0003c8f20) Reply frame received for 1\nI0128 00:22:04.540403    2433 log.go:172] (0xc0003c8f20) (0xc000a44000) Create stream\nI0128 00:22:04.540416    2433 log.go:172] (0xc0003c8f20) (0xc000a44000) Stream added, broadcasting: 3\nI0128 00:22:04.543696    2433 log.go:172] (0xc0003c8f20) Reply frame received for 3\nI0128 00:22:04.543894    2433 log.go:172] (0xc0003c8f20) (0xc0006360a0) Create stream\nI0128 00:22:04.543939    2433 log.go:172] (0xc0003c8f20) (0xc0006360a0) Stream added, broadcasting: 5\nI0128 00:22:04.548780    2433 log.go:172] (0xc0003c8f20) Reply frame received for 5\nI0128 00:22:04.661100    2433 log.go:172] (0xc0003c8f20) Data frame received for 5\nI0128 00:22:04.661209    2433 log.go:172] (0xc0006360a0) (5) Data frame handling\nI0128 00:22:04.661234    2433 log.go:172] (0xc0006360a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0128 00:22:04.676810    2433 log.go:172] (0xc0003c8f20) Data frame received for 5\nI0128 00:22:04.677018    2433 log.go:172] (0xc0006360a0) (5) Data frame handling\nI0128 00:22:04.677082    2433 log.go:172] (0xc0006360a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0128 00:22:04.760899    2433 log.go:172] (0xc0003c8f20) Data frame received for 1\nI0128 00:22:04.761232    2433 log.go:172] (0xc0003c8f20) (0xc000a44000) Stream removed, broadcasting: 3\nI0128 00:22:04.761435    2433 log.go:172] (0xc000636000) (1) Data frame handling\nI0128 00:22:04.761485    2433 log.go:172] (0xc000636000) (1) Data frame sent\nI0128 00:22:04.761600    2433 log.go:172] (0xc0003c8f20) (0xc0006360a0) Stream removed, broadcasting: 5\nI0128 00:22:04.761677    2433 log.go:172] (0xc0003c8f20) (0xc000636000) Stream removed, broadcasting: 1\nI0128 00:22:04.761724    2433 log.go:172] (0xc0003c8f20) Go away received\nI0128 00:22:04.763772    2433 log.go:172] (0xc0003c8f20) (0xc000636000) Stream removed, broadcasting: 1\nI0128 00:22:04.763806    2433 log.go:172] (0xc0003c8f20) (0xc000a44000) Stream removed, broadcasting: 3\nI0128 00:22:04.763819    2433 log.go:172] (0xc0003c8f20) (0xc0006360a0) Stream removed, broadcasting: 5\n"
Jan 28 00:22:04.783: INFO: stdout: ""
Jan 28 00:22:04.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5872 execpod645dc -- /bin/sh -x -c nc -zv -t -w 2 10.96.204.214 80'
Jan 28 00:22:05.190: INFO: stderr: "I0128 00:22:04.988793    2453 log.go:172] (0xc000a280b0) (0xc0001cd5e0) Create stream\nI0128 00:22:04.989021    2453 log.go:172] (0xc000a280b0) (0xc0001cd5e0) Stream added, broadcasting: 1\nI0128 00:22:04.994256    2453 log.go:172] (0xc000a280b0) Reply frame received for 1\nI0128 00:22:04.994609    2453 log.go:172] (0xc000a280b0) (0xc00085c000) Create stream\nI0128 00:22:04.994671    2453 log.go:172] (0xc000a280b0) (0xc00085c000) Stream added, broadcasting: 3\nI0128 00:22:04.996238    2453 log.go:172] (0xc000a280b0) Reply frame received for 3\nI0128 00:22:04.996293    2453 log.go:172] (0xc000a280b0) (0xc000a2a000) Create stream\nI0128 00:22:04.996317    2453 log.go:172] (0xc000a280b0) (0xc000a2a000) Stream added, broadcasting: 5\nI0128 00:22:04.999117    2453 log.go:172] (0xc000a280b0) Reply frame received for 5\nI0128 00:22:05.070423    2453 log.go:172] (0xc000a280b0) Data frame received for 5\nI0128 00:22:05.070618    2453 log.go:172] (0xc000a2a000) (5) Data frame handling\nI0128 00:22:05.070666    2453 log.go:172] (0xc000a2a000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.204.214 80\nI0128 00:22:05.079940    2453 log.go:172] (0xc000a280b0) Data frame received for 5\nI0128 00:22:05.080143    2453 log.go:172] (0xc000a2a000) (5) Data frame handling\nI0128 00:22:05.080206    2453 log.go:172] (0xc000a2a000) (5) Data frame sent\nConnection to 10.96.204.214 80 port [tcp/http] succeeded!\nI0128 00:22:05.175825    2453 log.go:172] (0xc000a280b0) (0xc000a2a000) Stream removed, broadcasting: 5\nI0128 00:22:05.176154    2453 log.go:172] (0xc000a280b0) Data frame received for 1\nI0128 00:22:05.176182    2453 log.go:172] (0xc000a280b0) (0xc00085c000) Stream removed, broadcasting: 3\nI0128 00:22:05.176288    2453 log.go:172] (0xc0001cd5e0) (1) Data frame handling\nI0128 00:22:05.176321    2453 log.go:172] (0xc0001cd5e0) (1) Data frame sent\nI0128 00:22:05.176338    2453 log.go:172] (0xc000a280b0) (0xc0001cd5e0) Stream removed, broadcasting: 1\nI0128 00:22:05.176367    2453 log.go:172] (0xc000a280b0) Go away received\nI0128 00:22:05.177953    2453 log.go:172] (0xc000a280b0) (0xc0001cd5e0) Stream removed, broadcasting: 1\nI0128 00:22:05.177972    2453 log.go:172] (0xc000a280b0) (0xc00085c000) Stream removed, broadcasting: 3\nI0128 00:22:05.177978    2453 log.go:172] (0xc000a280b0) (0xc000a2a000) Stream removed, broadcasting: 5\n"
Jan 28 00:22:05.190: INFO: stdout: ""
Jan 28 00:22:05.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5872 execpod645dc -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32645'
Jan 28 00:22:05.457: INFO: stderr: "I0128 00:22:05.325068    2473 log.go:172] (0xc000b35290) (0xc000b0e5a0) Create stream\nI0128 00:22:05.325232    2473 log.go:172] (0xc000b35290) (0xc000b0e5a0) Stream added, broadcasting: 1\nI0128 00:22:05.328233    2473 log.go:172] (0xc000b35290) Reply frame received for 1\nI0128 00:22:05.328271    2473 log.go:172] (0xc000b35290) (0xc000b0e640) Create stream\nI0128 00:22:05.328279    2473 log.go:172] (0xc000b35290) (0xc000b0e640) Stream added, broadcasting: 3\nI0128 00:22:05.329547    2473 log.go:172] (0xc000b35290) Reply frame received for 3\nI0128 00:22:05.329629    2473 log.go:172] (0xc000b35290) (0xc000b0e6e0) Create stream\nI0128 00:22:05.329638    2473 log.go:172] (0xc000b35290) (0xc000b0e6e0) Stream added, broadcasting: 5\nI0128 00:22:05.331143    2473 log.go:172] (0xc000b35290) Reply frame received for 5\nI0128 00:22:05.385810    2473 log.go:172] (0xc000b35290) Data frame received for 5\nI0128 00:22:05.385840    2473 log.go:172] (0xc000b0e6e0) (5) Data frame handling\nI0128 00:22:05.385855    2473 log.go:172] (0xc000b0e6e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32645\nI0128 00:22:05.390702    2473 log.go:172] (0xc000b35290) Data frame received for 5\nI0128 00:22:05.390787    2473 log.go:172] (0xc000b0e6e0) (5) Data frame handling\nI0128 00:22:05.390807    2473 log.go:172] (0xc000b0e6e0) (5) Data frame sent\nConnection to 10.96.2.250 32645 port [tcp/32645] succeeded!\nI0128 00:22:05.449094    2473 log.go:172] (0xc000b35290) (0xc000b0e6e0) Stream removed, broadcasting: 5\nI0128 00:22:05.449187    2473 log.go:172] (0xc000b35290) Data frame received for 1\nI0128 00:22:05.449218    2473 log.go:172] (0xc000b35290) (0xc000b0e640) Stream removed, broadcasting: 3\nI0128 00:22:05.449270    2473 log.go:172] (0xc000b0e5a0) (1) Data frame handling\nI0128 00:22:05.449284    2473 log.go:172] (0xc000b0e5a0) (1) Data frame sent\nI0128 00:22:05.449291    2473 log.go:172] (0xc000b35290) (0xc000b0e5a0) Stream removed, broadcasting: 1\nI0128 00:22:05.449299    2473 log.go:172] (0xc000b35290) Go away received\nI0128 00:22:05.450622    2473 log.go:172] (0xc000b35290) (0xc000b0e5a0) Stream removed, broadcasting: 1\nI0128 00:22:05.450640    2473 log.go:172] (0xc000b35290) (0xc000b0e640) Stream removed, broadcasting: 3\nI0128 00:22:05.450646    2473 log.go:172] (0xc000b35290) (0xc000b0e6e0) Stream removed, broadcasting: 5\n"
Jan 28 00:22:05.458: INFO: stdout: ""
Jan 28 00:22:05.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5872 execpod645dc -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32645'
Jan 28 00:22:05.797: INFO: stderr: "I0128 00:22:05.640255    2493 log.go:172] (0xc0009b1130) (0xc0009121e0) Create stream\nI0128 00:22:05.640469    2493 log.go:172] (0xc0009b1130) (0xc0009121e0) Stream added, broadcasting: 1\nI0128 00:22:05.657639    2493 log.go:172] (0xc0009b1130) Reply frame received for 1\nI0128 00:22:05.657759    2493 log.go:172] (0xc0009b1130) (0xc000647cc0) Create stream\nI0128 00:22:05.657769    2493 log.go:172] (0xc0009b1130) (0xc000647cc0) Stream added, broadcasting: 3\nI0128 00:22:05.659077    2493 log.go:172] (0xc0009b1130) Reply frame received for 3\nI0128 00:22:05.659102    2493 log.go:172] (0xc0009b1130) (0xc0005988c0) Create stream\nI0128 00:22:05.659111    2493 log.go:172] (0xc0009b1130) (0xc0005988c0) Stream added, broadcasting: 5\nI0128 00:22:05.660038    2493 log.go:172] (0xc0009b1130) Reply frame received for 5\nI0128 00:22:05.719616    2493 log.go:172] (0xc0009b1130) Data frame received for 5\nI0128 00:22:05.719703    2493 log.go:172] (0xc0005988c0) (5) Data frame handling\nI0128 00:22:05.719728    2493 log.go:172] (0xc0005988c0) (5) Data frame sent\nI0128 00:22:05.719743    2493 log.go:172] (0xc0009b1130) Data frame received for 5\nI0128 00:22:05.719756    2493 log.go:172] (0xc0005988c0) (5) Data frame handling\n+ nc -zv -tI0128 00:22:05.719799    2493 log.go:172] (0xc0005988c0) (5) Data frame sent\nI0128 00:22:05.719813    2493 log.go:172] (0xc0009b1130) Data frame received for 5\nI0128 00:22:05.719841    2493 log.go:172] (0xc0005988c0) (5) Data frame handling\nI0128 00:22:05.719878    2493 log.go:172] (0xc0005988c0) (5) Data frame sent\nI0128 00:22:05.719901    2493 log.go:172] (0xc0009b1130) Data frame received for 5\nI0128 00:22:05.719932    2493 log.go:172] (0xc0005988c0) (5) Data frame handling\n -w 2I0128 00:22:05.719958    2493 log.go:172] (0xc0005988c0) (5) Data frame sent\nI0128 00:22:05.719988    2493 log.go:172] (0xc0009b1130) Data frame received for 5\nI0128 00:22:05.719998    2493 log.go:172] (0xc0005988c0) (5) Data frame handling\nI0128 00:22:05.720010    2493 log.go:172] (0xc0005988c0) (5) Data frame sent\n 10.96.1.234 32645I0128 00:22:05.720065    2493 log.go:172] (0xc0009b1130) Data frame received for 5\nI0128 00:22:05.720078    2493 log.go:172] (0xc0005988c0) (5) Data frame handling\nI0128 00:22:05.720100    2493 log.go:172] (0xc0005988c0) (5) Data frame sent\n\nI0128 00:22:05.724299    2493 log.go:172] (0xc0009b1130) Data frame received for 5\nI0128 00:22:05.724343    2493 log.go:172] (0xc0005988c0) (5) Data frame handling\nI0128 00:22:05.724361    2493 log.go:172] (0xc0005988c0) (5) Data frame sent\nConnection to 10.96.1.234 32645 port [tcp/32645] succeeded!\nI0128 00:22:05.784958    2493 log.go:172] (0xc0009b1130) Data frame received for 1\nI0128 00:22:05.785338    2493 log.go:172] (0xc0009121e0) (1) Data frame handling\nI0128 00:22:05.785389    2493 log.go:172] (0xc0009121e0) (1) Data frame sent\nI0128 00:22:05.785488    2493 log.go:172] (0xc0009b1130) (0xc0009121e0) Stream removed, broadcasting: 1\nI0128 00:22:05.786105    2493 log.go:172] (0xc0009b1130) (0xc000647cc0) Stream removed, broadcasting: 3\nI0128 00:22:05.786642    2493 log.go:172] (0xc0009b1130) (0xc0005988c0) Stream removed, broadcasting: 5\nI0128 00:22:05.786756    2493 log.go:172] (0xc0009b1130) (0xc0009121e0) Stream removed, broadcasting: 1\nI0128 00:22:05.786778    2493 log.go:172] (0xc0009b1130) (0xc000647cc0) Stream removed, broadcasting: 3\nI0128 00:22:05.786790    2493 log.go:172] (0xc0009b1130) (0xc0005988c0) Stream removed, broadcasting: 5\nI0128 00:22:05.786831    2493 log.go:172] (0xc0009b1130) Go away received\n"
Jan 28 00:22:05.798: INFO: stdout: ""
Jan 28 00:22:05.798: INFO: Cleaning up the ExternalName to NodePort test service
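The three nc probes above check the endpoints a NodePort service must expose after the type change; reproduced by hand from a pod in the cluster (IPs and port are the values from this run):

  nc -zv -t -w 2 externalname-service 80   # service DNS name -> ClusterIP
  nc -zv -t -w 2 10.96.204.214 80          # ClusterIP directly
  nc -zv -t -w 2 10.96.2.250 32645         # node IP : allocated NodePort
  nc -zv -t -w 2 10.96.1.234 32645         # second node : same NodePort

The type flip itself is roughly kubectl patch svc externalname-service -p '{"spec":{"type":"NodePort","externalName":null}}', with the caveat that a selector and ports must also be supplied, since an ExternalName service has neither.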
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:22:05.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5872" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:20.851 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":280,"completed":94,"skipped":1745,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:22:05.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:22:06.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jan 28 00:22:09.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3948 create -f -'
Jan 28 00:22:11.423: INFO: stderr: ""
Jan 28 00:22:11.423: INFO: stdout: "e2e-test-crd-publish-openapi-2217-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 28 00:22:11.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3948 delete e2e-test-crd-publish-openapi-2217-crds test-foo'
Jan 28 00:22:11.662: INFO: stderr: ""
Jan 28 00:22:11.662: INFO: stdout: "e2e-test-crd-publish-openapi-2217-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jan 28 00:22:11.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3948 apply -f -'
Jan 28 00:22:12.117: INFO: stderr: ""
Jan 28 00:22:12.117: INFO: stdout: "e2e-test-crd-publish-openapi-2217-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 28 00:22:12.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3948 delete e2e-test-crd-publish-openapi-2217-crds test-foo'
Jan 28 00:22:12.379: INFO: stderr: ""
Jan 28 00:22:12.379: INFO: stdout: "e2e-test-crd-publish-openapi-2217-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jan 28 00:22:12.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3948 create -f -'
Jan 28 00:22:12.718: INFO: rc: 1
Jan 28 00:22:12.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3948 apply -f -'
Jan 28 00:22:13.178: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jan 28 00:22:13.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3948 create -f -'
Jan 28 00:22:13.522: INFO: rc: 1
Jan 28 00:22:13.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3948 apply -f -'
Jan 28 00:22:14.168: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jan 28 00:22:14.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2217-crds'
Jan 28 00:22:14.633: INFO: stderr: ""
Jan 28 00:22:14.634: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2217-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Foo\n\n   status\t<Object>\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jan 28 00:22:14.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2217-crds.metadata'
Jan 28 00:22:14.963: INFO: stderr: ""
Jan 28 00:22:14.963: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2217-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t<map[string]string>\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t<string>\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t<string>\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t<integer>\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t<string>\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t<string>\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t<integer>\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t<map[string]string>\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t<string>\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t<string>\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t<string>\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t<string>\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t<string>\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jan 28 00:22:14.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2217-crds.spec'
Jan 28 00:22:15.499: INFO: stderr: ""
Jan 28 00:22:15.499: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2217-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jan 28 00:22:15.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2217-crds.spec.bars'
Jan 28 00:22:15.883: INFO: stderr: ""
Jan 28 00:22:15.883: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2217-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jan 28 00:22:15.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2217-crds.spec.bars2'
Jan 28 00:22:16.311: INFO: rc: 1
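The denied create/apply attempts above can be reproduced with any manifest that violates the published schema. A sketch (the offending field is assumed; kind and version are from the explain output above):

  cat <<'EOF' | kubectl create -f -   # expected to fail client-side validation (rc 1)
  apiVersion: crd-publish-openapi-test-foo.example.com/v1
  kind: E2e-test-crd-publish-openapi-2217-crd
  metadata:
    name: test-foo
  spec:
    bars:
    - name: a
      unknownField: true   # not in the schema; kubectl rejects it before the request is sent
  EOF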
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:22:20.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3948" for this suite.

• [SLOW TEST:14.270 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":280,"completed":95,"skipped":1750,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:22:20.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 00:22:20.846: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 00:22:22.861: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767740, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767740, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767740, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767740, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:22:24.873: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767740, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767740, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767740, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767740, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:22:26.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767740, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767740, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767740, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715767740, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 00:22:29.908: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
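Listing and collection-deleting webhook configurations, as exercised here, is label-driven; a sketch with a wholly assumed label selector:

  kubectl get validatingwebhookconfigurations -l e2e-list-test=demo
  # delete the whole collection matching the selector in one call
  kubectl delete validatingwebhookconfigurations -l e2e-list-test=demo

After the collection delete, the final configMap create above succeeds because no webhook is left to deny it.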
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:22:30.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8285" for this suite.
STEP: Destroying namespace "webhook-8285-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.452 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":280,"completed":96,"skipped":1752,"failed":0}
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:22:30.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-2264/configmap-test-d75570c7-4879-44bd-baf6-828f4d4bda0d
STEP: Creating a pod to test consume configMaps
Jan 28 00:22:30.762: INFO: Waiting up to 5m0s for pod "pod-configmaps-36894dc4-36d5-4f9b-a68d-94656c18d325" in namespace "configmap-2264" to be "success or failure"
Jan 28 00:22:30.789: INFO: Pod "pod-configmaps-36894dc4-36d5-4f9b-a68d-94656c18d325": Phase="Pending", Reason="", readiness=false. Elapsed: 27.108208ms
Jan 28 00:22:32.794: INFO: Pod "pod-configmaps-36894dc4-36d5-4f9b-a68d-94656c18d325": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031766208s
Jan 28 00:22:34.799: INFO: Pod "pod-configmaps-36894dc4-36d5-4f9b-a68d-94656c18d325": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036859834s
Jan 28 00:22:36.805: INFO: Pod "pod-configmaps-36894dc4-36d5-4f9b-a68d-94656c18d325": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042531211s
Jan 28 00:22:38.810: INFO: Pod "pod-configmaps-36894dc4-36d5-4f9b-a68d-94656c18d325": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047983791s
Jan 28 00:22:40.817: INFO: Pod "pod-configmaps-36894dc4-36d5-4f9b-a68d-94656c18d325": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054416137s
STEP: Saw pod success
Jan 28 00:22:40.817: INFO: Pod "pod-configmaps-36894dc4-36d5-4f9b-a68d-94656c18d325" satisfied condition "success or failure"
Jan 28 00:22:40.821: INFO: Trying to get logs from node jerma-node pod pod-configmaps-36894dc4-36d5-4f9b-a68d-94656c18d325 container env-test: 
STEP: delete the pod
Jan 28 00:22:40.941: INFO: Waiting for pod pod-configmaps-36894dc4-36d5-4f9b-a68d-94656c18d325 to disappear
Jan 28 00:22:40.953: INFO: Pod pod-configmaps-36894dc4-36d5-4f9b-a68d-94656c18d325 no longer exists
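The same consumption path as a standalone example: a ConfigMap key injected into a container's environment (names, key, and image assumed):

  kubectl create configmap configmap-test --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-env-demo
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ['sh', '-c', 'echo CONFIG_DATA_1=$CONFIG_DATA_1']
      env:
      - name: CONFIG_DATA_1
        valueFrom:
          configMapKeyRef:
            name: configmap-test
            key: data-1
  EOF
  # once Succeeded: kubectl logs cm-env-demo -> CONFIG_DATA_1=value-1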
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:22:40.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2264" for this suite.

• [SLOW TEST:10.336 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":280,"completed":97,"skipped":1760,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:22:40.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 28 00:22:41.145: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fcef1448-6a7a-4a53-9b4c-781fe3a17d79" in namespace "downward-api-6838" to be "success or failure"
Jan 28 00:22:41.158: INFO: Pod "downwardapi-volume-fcef1448-6a7a-4a53-9b4c-781fe3a17d79": Phase="Pending", Reason="", readiness=false. Elapsed: 13.59698ms
Jan 28 00:22:43.168: INFO: Pod "downwardapi-volume-fcef1448-6a7a-4a53-9b4c-781fe3a17d79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023040996s
Jan 28 00:22:45.177: INFO: Pod "downwardapi-volume-fcef1448-6a7a-4a53-9b4c-781fe3a17d79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031912781s
Jan 28 00:22:47.185: INFO: Pod "downwardapi-volume-fcef1448-6a7a-4a53-9b4c-781fe3a17d79": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039843422s
Jan 28 00:22:49.194: INFO: Pod "downwardapi-volume-fcef1448-6a7a-4a53-9b4c-781fe3a17d79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049727641s
STEP: Saw pod success
Jan 28 00:22:49.195: INFO: Pod "downwardapi-volume-fcef1448-6a7a-4a53-9b4c-781fe3a17d79" satisfied condition "success or failure"
Jan 28 00:22:49.200: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-fcef1448-6a7a-4a53-9b4c-781fe3a17d79 container client-container: 
STEP: delete the pod
Jan 28 00:22:50.002: INFO: Waiting for pod downwardapi-volume-fcef1448-6a7a-4a53-9b4c-781fe3a17d79 to disappear
Jan 28 00:22:50.022: INFO: Pod downwardapi-volume-fcef1448-6a7a-4a53-9b4c-781fe3a17d79 no longer exists
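The downward API volume used here projects the container's own CPU request into a file; a minimal sketch (names, image, and the 250m request are assumed):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-cpu-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ['sh', '-c', 'cat /etc/podinfo/cpu_request']
      resources:
        requests:
          cpu: 250m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.cpu
            divisor: 1m
  EOF
  # once Succeeded: kubectl logs downward-cpu-demo -> 250 (millicores, per the 1m divisor)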
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:22:50.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6838" for this suite.

• [SLOW TEST:9.061 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":98,"skipped":1798,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:22:50.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-d3346fcb-4ff5-4dde-b567-a8ce2d07bf78
STEP: Creating a pod to test consume secrets
Jan 28 00:22:50.144: INFO: Waiting up to 5m0s for pod "pod-secrets-41952a85-930f-42be-bc99-7941bc2c4818" in namespace "secrets-5313" to be "success or failure"
Jan 28 00:22:50.169: INFO: Pod "pod-secrets-41952a85-930f-42be-bc99-7941bc2c4818": Phase="Pending", Reason="", readiness=false. Elapsed: 25.163424ms
Jan 28 00:22:52.172: INFO: Pod "pod-secrets-41952a85-930f-42be-bc99-7941bc2c4818": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028046992s
Jan 28 00:22:54.212: INFO: Pod "pod-secrets-41952a85-930f-42be-bc99-7941bc2c4818": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068036983s
Jan 28 00:22:56.220: INFO: Pod "pod-secrets-41952a85-930f-42be-bc99-7941bc2c4818": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076093883s
Jan 28 00:22:58.230: INFO: Pod "pod-secrets-41952a85-930f-42be-bc99-7941bc2c4818": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085598716s
STEP: Saw pod success
Jan 28 00:22:58.230: INFO: Pod "pod-secrets-41952a85-930f-42be-bc99-7941bc2c4818" satisfied condition "success or failure"
Jan 28 00:22:58.235: INFO: Trying to get logs from node jerma-node pod pod-secrets-41952a85-930f-42be-bc99-7941bc2c4818 container secret-env-test: 
STEP: delete the pod
Jan 28 00:22:58.283: INFO: Waiting for pod pod-secrets-41952a85-930f-42be-bc99-7941bc2c4818 to disappear
Jan 28 00:22:58.341: INFO: Pod pod-secrets-41952a85-930f-42be-bc99-7941bc2c4818 no longer exists
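Secrets reach the environment the same way ConfigMaps do, via secretKeyRef; a sketch with assumed names and image:

  kubectl create secret generic secret-test --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-env-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-env-test
      image: busybox
      command: ['sh', '-c', 'echo SECRET_DATA=$SECRET_DATA']
      env:
      - name: SECRET_DATA
        valueFrom:
          secretKeyRef:
            name: secret-test
            key: data-1
  EOF
  # once Succeeded: kubectl logs secret-env-demo -> SECRET_DATA=value-1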
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:22:58.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5313" for this suite.

• [SLOW TEST:8.353 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":280,"completed":99,"skipped":1814,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:22:58.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1230 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1230;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1230 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1230;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1230.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1230.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1230.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1230.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1230.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1230.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1230.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1230.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1230.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1230.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1230.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1230.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1230.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 114.132.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.132.114_udp@PTR;check="$$(dig +tcp +noall +answer +search 114.132.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.132.114_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1230 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1230;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1230 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1230;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1230.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1230.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1230.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1230.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1230.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1230.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1230.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1230.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1230.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1230.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1230.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1230.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1230.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 114.132.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.132.114_udp@PTR;check="$$(dig +tcp +noall +answer +search 114.132.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.132.114_tcp@PTR;sleep 1; done
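
Each loop above retries for up to 600 seconds, writing an OK marker file per name/protocol pair once dig returns a non-empty answer; the framework later polls those marker files. The doubled $$ is how a literal $ survives the container command field, where Kubernetes expands $(VAR) references. One iteration, unescaped for readability (a sketch, not the verbatim test command):

  # UDP lookup of the partially qualified service name; +search applies the
  # pod's resolv.conf search path (dns-1230.svc.cluster.local, svc.cluster.local, ...)
  check="$(dig +notcp +noall +answer +search dns-test-service A)" \
    && test -n "$check" \
    && echo OK > /results/wheezy_udp@dns-test-service
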

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 00:23:08.714: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf: the server could not find the requested resource (get pods dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf)
Jan 28 00:23:08.720: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf: the server could not find the requested resource (get pods dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf)
Jan 28 00:23:08.726: INFO: Unable to read wheezy_udp@dns-test-service.dns-1230 from pod dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf: the server could not find the requested resource (get pods dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf)
Jan 28 00:23:08.731: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1230 from pod dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf: the server could not find the requested resource (get pods dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf)
Jan 28 00:23:08.736: INFO: Unable to read wheezy_udp@dns-test-service.dns-1230.svc from pod dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf: the server could not find the requested resource (get pods dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf)
Jan 28 00:23:08.741: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1230.svc from pod dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf: the server could not find the requested resource (get pods dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf)
Jan 28 00:23:08.747: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1230.svc from pod dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf: the server could not find the requested resource (get pods dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf)
Jan 28 00:23:08.752: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1230.svc from pod dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf: the server could not find the requested resource (get pods dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf)
Jan 28 00:23:08.799: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf: the server could not find the requested resource (get pods dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf)
Jan 28 00:23:08.812: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf: the server could not find the requested resource (get pods dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf)
Jan 28 00:23:08.831: INFO: Unable to read jessie_udp@dns-test-service.dns-1230 from pod dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf: the server could not find the requested resource (get pods dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf)
Jan 28 00:23:08.838: INFO: Unable to read jessie_tcp@dns-test-service.dns-1230 from pod dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf: the server could not find the requested resource (get pods dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf)
Jan 28 00:23:08.844: INFO: Unable to read jessie_udp@dns-test-service.dns-1230.svc from pod dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf: the server could not find the requested resource (get pods dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf)
Jan 28 00:23:08.849: INFO: Unable to read jessie_tcp@dns-test-service.dns-1230.svc from pod dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf: the server could not find the requested resource (get pods dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf)
Jan 28 00:23:08.854: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1230.svc from pod dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf: the server could not find the requested resource (get pods dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf)
Jan 28 00:23:08.863: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1230.svc from pod dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf: the server could not find the requested resource (get pods dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf)
Jan 28 00:23:08.911: INFO: Lookups using dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1230 wheezy_tcp@dns-test-service.dns-1230 wheezy_udp@dns-test-service.dns-1230.svc wheezy_tcp@dns-test-service.dns-1230.svc wheezy_udp@_http._tcp.dns-test-service.dns-1230.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1230.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1230 jessie_tcp@dns-test-service.dns-1230 jessie_udp@dns-test-service.dns-1230.svc jessie_tcp@dns-test-service.dns-1230.svc jessie_udp@_http._tcp.dns-test-service.dns-1230.svc jessie_tcp@_http._tcp.dns-test-service.dns-1230.svc]

Jan 28 00:23:13 through Jan 28 00:23:34: five further probe cycles (00:23:13, 00:23:18, 00:23:23, 00:23:28, 00:23:33) failed for the same sixteen names (every wheezy_* and jessie_* UDP/TCP lookup of dns-test-service, dns-test-service.dns-1230, dns-test-service.dns-1230.svc, and _http._tcp.dns-test-service.dns-1230.svc), each with the same "the server could not find the requested resource (get pods dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf)" error as the 00:23:08 cycle above.

Jan 28 00:23:39.043: INFO: DNS probes using dns-1230/dns-test-2afc9ce4-e554-4da7-b4ea-3daa6f168dcf succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:23:39.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1230" for this suite.

• [SLOW TEST:40.992 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":280,"completed":100,"skipped":1835,"failed":0}
SSSSSS
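
The names probed above are deliberately partial (dns-test-service, dns-test-service.dns-1230, dns-test-service.dns-1230.svc); they resolve only because the pod's resolv.conf search path completes them to the fully qualified form. A quick manual check from any pod in the same namespace that has dig (placeholder pod name; the suite's namespace is destroyed after the run):

  kubectl exec -n dns-1230 <pod-with-dig> -- dig +search +short dns-test-service A
  kubectl exec -n dns-1230 <pod-with-dig> -- dig +search +short dns-test-service.dns-1230.svc A

The repeated "could not find the requested resource" errors during the first ~30 seconds are the framework polling the probe pod's marker files before every lookup has succeeded, not API-server failures; the probes converge once the service records are served.
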
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:23:39.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-a228811e-991c-407a-8af2-1e18a78794ca
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-a228811e-991c-407a-8af2-1e18a78794ca
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:23:49.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3616" for this suite.

• [SLOW TEST:10.431 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":101,"skipped":1841,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
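
The update test above relies on the kubelet re-syncing configMap volumes: editing the ConfigMap changes the mounted file in place, with no pod restart. A hand-run sketch (names illustrative):

  kubectl create configmap demo-cm --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-update-demo
  spec:
    containers:
    - name: watcher
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/cm/data-1; echo; sleep 5; done"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/cm
    volumes:
    - name: cfg
      configMap:
        name: demo-cm
  EOF
  # Update the ConfigMap in place; the new value appears in the mounted file after
  # the kubelet's next sync period (roughly the ~10s this test spent waiting).
  kubectl create configmap demo-cm --from-literal=data-1=value-2 \
    --dry-run=client -o yaml | kubectl apply -f -
  kubectl logs -f cm-update-demo
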
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:23:49.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-e9911048-320f-4f2d-aac5-ca9fddd57e39
STEP: Creating a pod to test consume configMaps
Jan 28 00:23:49.947: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-14a087a8-9922-4d07-9096-6667a1044e99" in namespace "projected-6852" to be "success or failure"
Jan 28 00:23:49.975: INFO: Pod "pod-projected-configmaps-14a087a8-9922-4d07-9096-6667a1044e99": Phase="Pending", Reason="", readiness=false. Elapsed: 28.892642ms
Jan 28 00:23:52.452: INFO: Pod "pod-projected-configmaps-14a087a8-9922-4d07-9096-6667a1044e99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.50549874s
Jan 28 00:23:54.465: INFO: Pod "pod-projected-configmaps-14a087a8-9922-4d07-9096-6667a1044e99": Phase="Pending", Reason="", readiness=false. Elapsed: 4.5182129s
Jan 28 00:23:56.471: INFO: Pod "pod-projected-configmaps-14a087a8-9922-4d07-9096-6667a1044e99": Phase="Pending", Reason="", readiness=false. Elapsed: 6.524647825s
Jan 28 00:23:58.477: INFO: Pod "pod-projected-configmaps-14a087a8-9922-4d07-9096-6667a1044e99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.530937563s
STEP: Saw pod success
Jan 28 00:23:58.478: INFO: Pod "pod-projected-configmaps-14a087a8-9922-4d07-9096-6667a1044e99" satisfied condition "success or failure"
Jan 28 00:23:58.481: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-14a087a8-9922-4d07-9096-6667a1044e99 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 28 00:23:58.667: INFO: Waiting for pod pod-projected-configmaps-14a087a8-9922-4d07-9096-6667a1044e99 to disappear
Jan 28 00:23:58.674: INFO: Pod pod-projected-configmaps-14a087a8-9922-4d07-9096-6667a1044e99 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:23:58.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6852" for this suite.

• [SLOW TEST:8.865 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":102,"skipped":1892,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
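
Here defaultMode is the file permission applied to every key projected into the volume. The log does not show the mode the suite actually used, so the 0400 in this manifest sketch is an assumed value for illustration:

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-mode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["sh", "-c", "ls -l /etc/projected"]   # expect -r-------- on each key
      volumeMounts:
      - name: cfg
        mountPath: /etc/projected
    volumes:
    - name: cfg
      projected:
        defaultMode: 0400           # octal; applied to all projected files
        sources:
        - configMap:
            name: demo-cm
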
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:23:58.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 28 00:23:58.859: INFO: Waiting up to 5m0s for pod "pod-15c29f75-d336-4470-ad83-de0f5d4f7600" in namespace "emptydir-403" to be "success or failure"
Jan 28 00:23:58.902: INFO: Pod "pod-15c29f75-d336-4470-ad83-de0f5d4f7600": Phase="Pending", Reason="", readiness=false. Elapsed: 42.746101ms
Jan 28 00:24:00.906: INFO: Pod "pod-15c29f75-d336-4470-ad83-de0f5d4f7600": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047009718s
Jan 28 00:24:02.913: INFO: Pod "pod-15c29f75-d336-4470-ad83-de0f5d4f7600": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054217509s
Jan 28 00:24:04.923: INFO: Pod "pod-15c29f75-d336-4470-ad83-de0f5d4f7600": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063662647s
Jan 28 00:24:06.928: INFO: Pod "pod-15c29f75-d336-4470-ad83-de0f5d4f7600": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068425429s
STEP: Saw pod success
Jan 28 00:24:06.928: INFO: Pod "pod-15c29f75-d336-4470-ad83-de0f5d4f7600" satisfied condition "success or failure"
Jan 28 00:24:06.930: INFO: Trying to get logs from node jerma-node pod pod-15c29f75-d336-4470-ad83-de0f5d4f7600 container test-container: 
STEP: delete the pod
Jan 28 00:24:06.970: INFO: Waiting for pod pod-15c29f75-d336-4470-ad83-de0f5d4f7600 to disappear
Jan 28 00:24:06.986: INFO: Pod pod-15c29f75-d336-4470-ad83-de0f5d4f7600 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:24:06.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-403" for this suite.

• [SLOW TEST:8.314 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":103,"skipped":1921,"failed":0}
SS
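
The (root,0644,default) variant writes a file as root into an emptyDir backed by the node's default medium (disk, not tmpfs) and asserts the 0644 permissions. A sketch of the same shape, with illustrative names:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo hello > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f"]
      volumeMounts:
      - name: scratch
        mountPath: /mnt
    volumes:
    - name: scratch
      emptyDir: {}                  # default medium; 'medium: Memory' would use tmpfs instead
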
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:24:07.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-7b9dcf58-5de2-48b9-8764-f5e8d7747704
STEP: Creating a pod to test consume configMaps
Jan 28 00:24:07.305: INFO: Waiting up to 5m0s for pod "pod-configmaps-9f123c22-5a3d-4adb-9bf6-962c16d97606" in namespace "configmap-9860" to be "success or failure"
Jan 28 00:24:07.323: INFO: Pod "pod-configmaps-9f123c22-5a3d-4adb-9bf6-962c16d97606": Phase="Pending", Reason="", readiness=false. Elapsed: 18.036746ms
Jan 28 00:24:09.431: INFO: Pod "pod-configmaps-9f123c22-5a3d-4adb-9bf6-962c16d97606": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126299297s
Jan 28 00:24:11.439: INFO: Pod "pod-configmaps-9f123c22-5a3d-4adb-9bf6-962c16d97606": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134530432s
Jan 28 00:24:13.447: INFO: Pod "pod-configmaps-9f123c22-5a3d-4adb-9bf6-962c16d97606": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142501939s
Jan 28 00:24:15.455: INFO: Pod "pod-configmaps-9f123c22-5a3d-4adb-9bf6-962c16d97606": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.150151352s
STEP: Saw pod success
Jan 28 00:24:15.455: INFO: Pod "pod-configmaps-9f123c22-5a3d-4adb-9bf6-962c16d97606" satisfied condition "success or failure"
Jan 28 00:24:15.463: INFO: Trying to get logs from node jerma-node pod pod-configmaps-9f123c22-5a3d-4adb-9bf6-962c16d97606 container configmap-volume-test: 
STEP: delete the pod
Jan 28 00:24:15.523: INFO: Waiting for pod pod-configmaps-9f123c22-5a3d-4adb-9bf6-962c16d97606 to disappear
Jan 28 00:24:15.577: INFO: Pod pod-configmaps-9f123c22-5a3d-4adb-9bf6-962c16d97606 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:24:15.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9860" for this suite.

• [SLOW TEST:8.596 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":104,"skipped":1923,"failed":0}
S
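
This is the same defaultMode check as the projected variant above, but against a plain configMap volume; only the volume stanza differs (sketch, mode again assumed):

  volumes:
  - name: cfg
    configMap:
      name: demo-cm
      defaultMode: 0400
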
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:24:15.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service nodeport-test with type=NodePort in namespace services-1331
STEP: creating replication controller nodeport-test in namespace services-1331
I0128 00:24:16.023249       9 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-1331, replica count: 2
I0128 00:24:19.074403       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:24:22.074789       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:24:25.075317       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 28 00:24:25.075: INFO: Creating new exec pod
Jan 28 00:24:34.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1331 execpodp9d2f -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jan 28 00:24:34.578: INFO: stderr: "I0128 00:24:34.354752    2798 log.go:172] (0xc0008f96b0) (0xc00090e820) Create stream\nI0128 00:24:34.355150    2798 log.go:172] (0xc0008f96b0) (0xc00090e820) Stream added, broadcasting: 1\nI0128 00:24:34.361570    2798 log.go:172] (0xc0008f96b0) Reply frame received for 1\nI0128 00:24:34.361652    2798 log.go:172] (0xc0008f96b0) (0xc000665c20) Create stream\nI0128 00:24:34.361668    2798 log.go:172] (0xc0008f96b0) (0xc000665c20) Stream added, broadcasting: 3\nI0128 00:24:34.362868    2798 log.go:172] (0xc0008f96b0) Reply frame received for 3\nI0128 00:24:34.362895    2798 log.go:172] (0xc0008f96b0) (0xc000604820) Create stream\nI0128 00:24:34.362904    2798 log.go:172] (0xc0008f96b0) (0xc000604820) Stream added, broadcasting: 5\nI0128 00:24:34.364312    2798 log.go:172] (0xc0008f96b0) Reply frame received for 5\nI0128 00:24:34.444723    2798 log.go:172] (0xc0008f96b0) Data frame received for 5\nI0128 00:24:34.444897    2798 log.go:172] (0xc000604820) (5) Data frame handling\nI0128 00:24:34.444930    2798 log.go:172] (0xc000604820) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0128 00:24:34.459439    2798 log.go:172] (0xc0008f96b0) Data frame received for 5\nI0128 00:24:34.459959    2798 log.go:172] (0xc000604820) (5) Data frame handling\nI0128 00:24:34.460149    2798 log.go:172] (0xc000604820) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0128 00:24:34.561065    2798 log.go:172] (0xc0008f96b0) Data frame received for 1\nI0128 00:24:34.561522    2798 log.go:172] (0xc00090e820) (1) Data frame handling\nI0128 00:24:34.561631    2798 log.go:172] (0xc00090e820) (1) Data frame sent\nI0128 00:24:34.565818    2798 log.go:172] (0xc0008f96b0) (0xc000604820) Stream removed, broadcasting: 5\nI0128 00:24:34.565968    2798 log.go:172] (0xc0008f96b0) (0xc00090e820) Stream removed, broadcasting: 1\nI0128 00:24:34.566082    2798 log.go:172] (0xc0008f96b0) (0xc000665c20) Stream removed, broadcasting: 3\nI0128 00:24:34.566121    2798 log.go:172] (0xc0008f96b0) Go away received\nI0128 00:24:34.567211    2798 log.go:172] (0xc0008f96b0) (0xc00090e820) Stream removed, broadcasting: 1\nI0128 00:24:34.567227    2798 log.go:172] (0xc0008f96b0) (0xc000665c20) Stream removed, broadcasting: 3\nI0128 00:24:34.567233    2798 log.go:172] (0xc0008f96b0) (0xc000604820) Stream removed, broadcasting: 5\n"
Jan 28 00:24:34.578: INFO: stdout: ""
Jan 28 00:24:34.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1331 execpodp9d2f -- /bin/sh -x -c nc -zv -t -w 2 10.96.168.113 80'
Jan 28 00:24:34.930: INFO: stderr: "I0128 00:24:34.737077    2818 log.go:172] (0xc0001109a0) (0xc000439540) Create stream\nI0128 00:24:34.737330    2818 log.go:172] (0xc0001109a0) (0xc000439540) Stream added, broadcasting: 1\nI0128 00:24:34.741625    2818 log.go:172] (0xc0001109a0) Reply frame received for 1\nI0128 00:24:34.741676    2818 log.go:172] (0xc0001109a0) (0xc000637c20) Create stream\nI0128 00:24:34.741693    2818 log.go:172] (0xc0001109a0) (0xc000637c20) Stream added, broadcasting: 3\nI0128 00:24:34.743006    2818 log.go:172] (0xc0001109a0) Reply frame received for 3\nI0128 00:24:34.743044    2818 log.go:172] (0xc0001109a0) (0xc0009c2000) Create stream\nI0128 00:24:34.743061    2818 log.go:172] (0xc0001109a0) (0xc0009c2000) Stream added, broadcasting: 5\nI0128 00:24:34.745303    2818 log.go:172] (0xc0001109a0) Reply frame received for 5\nI0128 00:24:34.819412    2818 log.go:172] (0xc0001109a0) Data frame received for 5\nI0128 00:24:34.819503    2818 log.go:172] (0xc0009c2000) (5) Data frame handling\nI0128 00:24:34.819542    2818 log.go:172] (0xc0009c2000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.168.113 80\nI0128 00:24:34.822529    2818 log.go:172] (0xc0001109a0) Data frame received for 5\nI0128 00:24:34.822559    2818 log.go:172] (0xc0009c2000) (5) Data frame handling\nI0128 00:24:34.822579    2818 log.go:172] (0xc0009c2000) (5) Data frame sent\nConnection to 10.96.168.113 80 port [tcp/http] succeeded!\nI0128 00:24:34.911287    2818 log.go:172] (0xc0001109a0) Data frame received for 1\nI0128 00:24:34.911537    2818 log.go:172] (0xc000439540) (1) Data frame handling\nI0128 00:24:34.911635    2818 log.go:172] (0xc000439540) (1) Data frame sent\nI0128 00:24:34.911754    2818 log.go:172] (0xc0001109a0) (0xc000439540) Stream removed, broadcasting: 1\nI0128 00:24:34.915640    2818 log.go:172] (0xc0001109a0) (0xc000637c20) Stream removed, broadcasting: 3\nI0128 00:24:34.916266    2818 log.go:172] (0xc0001109a0) (0xc0009c2000) Stream removed, broadcasting: 5\nI0128 00:24:34.916484    2818 log.go:172] (0xc0001109a0) (0xc000439540) Stream removed, broadcasting: 1\nI0128 00:24:34.916579    2818 log.go:172] (0xc0001109a0) (0xc000637c20) Stream removed, broadcasting: 3\nI0128 00:24:34.916641    2818 log.go:172] (0xc0001109a0) (0xc0009c2000) Stream removed, broadcasting: 5\n"
Jan 28 00:24:34.930: INFO: stdout: ""
Jan 28 00:24:34.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1331 execpodp9d2f -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 30042'
Jan 28 00:24:35.320: INFO: stderr: "I0128 00:24:35.151339    2840 log.go:172] (0xc000c15080) (0xc000a48280) Create stream\nI0128 00:24:35.151546    2840 log.go:172] (0xc000c15080) (0xc000a48280) Stream added, broadcasting: 1\nI0128 00:24:35.154667    2840 log.go:172] (0xc000c15080) Reply frame received for 1\nI0128 00:24:35.154699    2840 log.go:172] (0xc000c15080) (0xc000bec1e0) Create stream\nI0128 00:24:35.154708    2840 log.go:172] (0xc000c15080) (0xc000bec1e0) Stream added, broadcasting: 3\nI0128 00:24:35.155774    2840 log.go:172] (0xc000c15080) Reply frame received for 3\nI0128 00:24:35.155796    2840 log.go:172] (0xc000c15080) (0xc000a48320) Create stream\nI0128 00:24:35.155806    2840 log.go:172] (0xc000c15080) (0xc000a48320) Stream added, broadcasting: 5\nI0128 00:24:35.157296    2840 log.go:172] (0xc000c15080) Reply frame received for 5\nI0128 00:24:35.232137    2840 log.go:172] (0xc000c15080) Data frame received for 5\nI0128 00:24:35.232205    2840 log.go:172] (0xc000a48320) (5) Data frame handling\nI0128 00:24:35.232220    2840 log.go:172] (0xc000a48320) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 30042\nI0128 00:24:35.237680    2840 log.go:172] (0xc000c15080) Data frame received for 5\nI0128 00:24:35.237710    2840 log.go:172] (0xc000a48320) (5) Data frame handling\nI0128 00:24:35.237724    2840 log.go:172] (0xc000a48320) (5) Data frame sent\nConnection to 10.96.2.250 30042 port [tcp/30042] succeeded!\nI0128 00:24:35.308234    2840 log.go:172] (0xc000c15080) Data frame received for 1\nI0128 00:24:35.308396    2840 log.go:172] (0xc000a48280) (1) Data frame handling\nI0128 00:24:35.308474    2840 log.go:172] (0xc000a48280) (1) Data frame sent\nI0128 00:24:35.309089    2840 log.go:172] (0xc000c15080) (0xc000a48280) Stream removed, broadcasting: 1\nI0128 00:24:35.312063    2840 log.go:172] (0xc000c15080) (0xc000bec1e0) Stream removed, broadcasting: 3\nI0128 00:24:35.312432    2840 log.go:172] (0xc000c15080) (0xc000a48320) Stream removed, broadcasting: 5\nI0128 00:24:35.312471    2840 log.go:172] (0xc000c15080) Go away received\nI0128 00:24:35.312902    2840 log.go:172] (0xc000c15080) (0xc000a48280) Stream removed, broadcasting: 1\nI0128 00:24:35.312923    2840 log.go:172] (0xc000c15080) (0xc000bec1e0) Stream removed, broadcasting: 3\nI0128 00:24:35.312929    2840 log.go:172] (0xc000c15080) (0xc000a48320) Stream removed, broadcasting: 5\n"
Jan 28 00:24:35.320: INFO: stdout: ""
Jan 28 00:24:35.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1331 execpodp9d2f -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 30042'
Jan 28 00:24:35.763: INFO: stderr: "I0128 00:24:35.579925    2860 log.go:172] (0xc000028000) (0xc00069c780) Create stream\nI0128 00:24:35.580214    2860 log.go:172] (0xc000028000) (0xc00069c780) Stream added, broadcasting: 1\nI0128 00:24:35.586231    2860 log.go:172] (0xc000028000) Reply frame received for 1\nI0128 00:24:35.586292    2860 log.go:172] (0xc000028000) (0xc0004f9400) Create stream\nI0128 00:24:35.586305    2860 log.go:172] (0xc000028000) (0xc0004f9400) Stream added, broadcasting: 3\nI0128 00:24:35.587659    2860 log.go:172] (0xc000028000) Reply frame received for 3\nI0128 00:24:35.587699    2860 log.go:172] (0xc000028000) (0xc0004f94a0) Create stream\nI0128 00:24:35.587710    2860 log.go:172] (0xc000028000) (0xc0004f94a0) Stream added, broadcasting: 5\nI0128 00:24:35.588842    2860 log.go:172] (0xc000028000) Reply frame received for 5\nI0128 00:24:35.691145    2860 log.go:172] (0xc000028000) Data frame received for 5\nI0128 00:24:35.691258    2860 log.go:172] (0xc0004f94a0) (5) Data frame handling\nI0128 00:24:35.691288    2860 log.go:172] (0xc0004f94a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 30042\nI0128 00:24:35.692936    2860 log.go:172] (0xc000028000) Data frame received for 5\nI0128 00:24:35.692953    2860 log.go:172] (0xc0004f94a0) (5) Data frame handling\nI0128 00:24:35.692965    2860 log.go:172] (0xc0004f94a0) (5) Data frame sent\nConnection to 10.96.1.234 30042 port [tcp/30042] succeeded!\nI0128 00:24:35.751713    2860 log.go:172] (0xc000028000) Data frame received for 1\nI0128 00:24:35.751844    2860 log.go:172] (0xc000028000) (0xc0004f94a0) Stream removed, broadcasting: 5\nI0128 00:24:35.751919    2860 log.go:172] (0xc00069c780) (1) Data frame handling\nI0128 00:24:35.751965    2860 log.go:172] (0xc00069c780) (1) Data frame sent\nI0128 00:24:35.752020    2860 log.go:172] (0xc000028000) (0xc0004f9400) Stream removed, broadcasting: 3\nI0128 00:24:35.752104    2860 log.go:172] (0xc000028000) (0xc00069c780) Stream removed, broadcasting: 1\nI0128 00:24:35.752135    2860 log.go:172] (0xc000028000) Go away received\nI0128 00:24:35.753766    2860 log.go:172] (0xc000028000) (0xc00069c780) Stream removed, broadcasting: 1\nI0128 00:24:35.753796    2860 log.go:172] (0xc000028000) (0xc0004f9400) Stream removed, broadcasting: 3\nI0128 00:24:35.753811    2860 log.go:172] (0xc000028000) (0xc0004f94a0) Stream removed, broadcasting: 5\n"
Jan 28 00:24:35.763: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:24:35.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1331" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:20.176 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":280,"completed":105,"skipped":1924,"failed":0}
SSSSSSSSSSSSS
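
The test above created a NodePort Service and then verified reachability by exec'ing "nc -zv -t -w 2 <nodeIP> 30042" from a helper pod against each node (10.96.2.250 and 10.96.1.234 in the stderr captures). A minimal sketch of the kind of Service object involved, built with the same k8s.io/api Go types the e2e framework uses; the selector label and port/targetPort values are illustrative assumptions, only the node port 30042 is taken from the log:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
    	// A NodePort Service: kube-proxy opens the same port (here 30042,
    	// matching the port probed in the log) on every node and forwards
    	// to the selected pods' target port.
    	svc := corev1.Service{
    		ObjectMeta: metav1.ObjectMeta{Name: "nodeport-service"},
    		Spec: corev1.ServiceSpec{
    			Type:     corev1.ServiceTypeNodePort,
    			Selector: map[string]string{"app": "nodeport-test"}, // assumed label
    			Ports: []corev1.ServicePort{{
    				Protocol:   corev1.ProtocolTCP,
    				Port:       80,
    				TargetPort: intstr.FromInt(8080),
    				NodePort:   30042,
    			}},
    		},
    	}
    	out, _ := json.MarshalIndent(svc, "", "  ")
    	fmt.Println(string(out))
    }

Both nc probes succeed against different node IPs, which is exactly the NodePort contract: the port is open cluster-wide, not only on the node that runs the backing pod.
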
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:24:35.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:24:35.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3817" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":280,"completed":106,"skipped":1937,"failed":0}
SSSSSSSSSSSSSSSSSSS
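
The Kubelet test above runs against a pod whose container command always exits nonzero, so the kubelet keeps restarting it and the pod never becomes Ready; the conformance claim is only that such a pod can still be deleted cleanly. A sketch of such a pod, assuming a busybox container running /bin/false (name and command are illustrative, not taken from the test source):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// A pod whose only container fails on every start: with the default
    	// Always restart policy it crash-loops and never reports Ready.
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "always-fails"}, // illustrative name
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:    "always-fails",
    				Image:   "busybox",
    				Command: []string{"/bin/false"}, // assumed failing command
    			}},
    		},
    	}
    	out, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(out))
    }
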
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:24:35.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with configMap that has name projected-configmap-test-upd-2410888f-770c-4c99-8a31-e8dd30a40f7e
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-2410888f-770c-4c99-8a31-e8dd30a40f7e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:24:54.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1244" for this suite.

• [SLOW TEST:18.273 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":107,"skipped":1956,"failed":0}
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:24:54.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 28 00:24:54.378: INFO: Waiting up to 5m0s for pod "downward-api-080136ef-8e09-4b24-b8e9-e3e28981f479" in namespace "downward-api-5551" to be "success or failure"
Jan 28 00:24:54.404: INFO: Pod "downward-api-080136ef-8e09-4b24-b8e9-e3e28981f479": Phase="Pending", Reason="", readiness=false. Elapsed: 25.735012ms
Jan 28 00:24:56.412: INFO: Pod "downward-api-080136ef-8e09-4b24-b8e9-e3e28981f479": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033783289s
Jan 28 00:24:58.426: INFO: Pod "downward-api-080136ef-8e09-4b24-b8e9-e3e28981f479": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047638604s
Jan 28 00:25:00.432: INFO: Pod "downward-api-080136ef-8e09-4b24-b8e9-e3e28981f479": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053715937s
Jan 28 00:25:02.442: INFO: Pod "downward-api-080136ef-8e09-4b24-b8e9-e3e28981f479": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063569504s
STEP: Saw pod success
Jan 28 00:25:02.442: INFO: Pod "downward-api-080136ef-8e09-4b24-b8e9-e3e28981f479" satisfied condition "success or failure"
Jan 28 00:25:02.448: INFO: Trying to get logs from node jerma-node pod downward-api-080136ef-8e09-4b24-b8e9-e3e28981f479 container dapi-container: 
STEP: delete the pod
Jan 28 00:25:02.594: INFO: Waiting for pod downward-api-080136ef-8e09-4b24-b8e9-e3e28981f479 to disappear
Jan 28 00:25:02.601: INFO: Pod downward-api-080136ef-8e09-4b24-b8e9-e3e28981f479 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:25:02.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5551" for this suite.

• [SLOW TEST:8.351 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":280,"completed":108,"skipped":1956,"failed":0}
SSSSSSSSSSSSSS
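
The Downward API test injects the pod's own UID into a container environment variable and checks the container's output for it; the pod runs to completion ("Succeeded" above) and the framework then reads its logs. A sketch of the container shape involved — the env var name and printing command are illustrative, while fieldPath "metadata.uid" is the documented downward-API selector:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// Expose the pod's own UID to its process through the downward API.
    	c := corev1.Container{
    		Name:    "dapi-container",
    		Image:   "busybox",
    		Command: []string{"sh", "-c", "env"}, // assumed: dump env for the log check
    		Env: []corev1.EnvVar{{
    			Name: "POD_UID", // illustrative name
    			ValueFrom: &corev1.EnvVarSource{
    				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
    			},
    		}},
    	}
    	out, _ := json.MarshalIndent(c, "", "  ")
    	fmt.Println(string(out))
    }
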
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:25:02.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating pod
Jan 28 00:25:10.815: INFO: Pod pod-hostip-f2b5d393-22ac-4aea-b808-e53f931c34af has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:25:10.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6932" for this suite.

• [SLOW TEST:8.216 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":280,"completed":109,"skipped":1970,"failed":0}
SSSSSSSSSSSSSSSSS
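
The host-IP test simply creates a pod and asserts that status.hostIP gets populated with the node's address (10.96.2.250 above). A minimal sketch of reading that field with client-go, assuming the v0.17-era method signatures (later releases add a context.Context first argument and differ in options):

    package main

    import (
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from the same kubeconfig path the log shows.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Fetch the pod and read the node IP the kubelet reported for it.
    	pod, err := cs.CoreV1().Pods("pods-6932").Get(
    		"pod-hostip-f2b5d393-22ac-4aea-b808-e53f931c34af", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("hostIP:", pod.Status.HostIP) // 10.96.2.250 in the run above
    }
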
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:25:10.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-2260
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 28 00:25:10.990: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 28 00:25:11.243: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 00:25:13.282: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 00:25:15.247: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 00:25:17.479: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 00:25:19.252: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 00:25:21.248: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 00:25:23.249: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 00:25:25.252: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 00:25:27.250: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 00:25:29.249: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 00:25:31.250: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 00:25:33.249: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 00:25:35.249: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 00:25:37.248: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 28 00:25:37.254: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jan 28 00:25:45.283: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.44.0.2&port=8081&tries=1'] Namespace:pod-network-test-2260 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 00:25:45.283: INFO: >>> kubeConfig: /root/.kube/config
I0128 00:25:45.346049       9 log.go:172] (0xc002226000) (0xc001e2c3c0) Create stream
I0128 00:25:45.346187       9 log.go:172] (0xc002226000) (0xc001e2c3c0) Stream added, broadcasting: 1
I0128 00:25:45.354059       9 log.go:172] (0xc002226000) Reply frame received for 1
I0128 00:25:45.354173       9 log.go:172] (0xc002226000) (0xc0029a0460) Create stream
I0128 00:25:45.354205       9 log.go:172] (0xc002226000) (0xc0029a0460) Stream added, broadcasting: 3
I0128 00:25:45.356861       9 log.go:172] (0xc002226000) Reply frame received for 3
I0128 00:25:45.356917       9 log.go:172] (0xc002226000) (0xc001e2c5a0) Create stream
I0128 00:25:45.356933       9 log.go:172] (0xc002226000) (0xc001e2c5a0) Stream added, broadcasting: 5
I0128 00:25:45.360855       9 log.go:172] (0xc002226000) Reply frame received for 5
I0128 00:25:45.462146       9 log.go:172] (0xc002226000) Data frame received for 3
I0128 00:25:45.462189       9 log.go:172] (0xc0029a0460) (3) Data frame handling
I0128 00:25:45.462221       9 log.go:172] (0xc0029a0460) (3) Data frame sent
I0128 00:25:45.522869       9 log.go:172] (0xc002226000) (0xc0029a0460) Stream removed, broadcasting: 3
I0128 00:25:45.522948       9 log.go:172] (0xc002226000) Data frame received for 1
I0128 00:25:45.522959       9 log.go:172] (0xc001e2c3c0) (1) Data frame handling
I0128 00:25:45.522971       9 log.go:172] (0xc001e2c3c0) (1) Data frame sent
I0128 00:25:45.522981       9 log.go:172] (0xc002226000) (0xc001e2c3c0) Stream removed, broadcasting: 1
I0128 00:25:45.523339       9 log.go:172] (0xc002226000) (0xc001e2c5a0) Stream removed, broadcasting: 5
I0128 00:25:45.523367       9 log.go:172] (0xc002226000) Go away received
I0128 00:25:45.523398       9 log.go:172] (0xc002226000) (0xc001e2c3c0) Stream removed, broadcasting: 1
I0128 00:25:45.523409       9 log.go:172] (0xc002226000) (0xc0029a0460) Stream removed, broadcasting: 3
I0128 00:25:45.523417       9 log.go:172] (0xc002226000) (0xc001e2c5a0) Stream removed, broadcasting: 5
Jan 28 00:25:45.523: INFO: Waiting for responses: map[]
Jan 28 00:25:45.532: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-2260 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 00:25:45.533: INFO: >>> kubeConfig: /root/.kube/config
I0128 00:25:45.592111       9 log.go:172] (0xc002e8a420) (0xc001714460) Create stream
I0128 00:25:45.592179       9 log.go:172] (0xc002e8a420) (0xc001714460) Stream added, broadcasting: 1
I0128 00:25:45.596998       9 log.go:172] (0xc002e8a420) Reply frame received for 1
I0128 00:25:45.597071       9 log.go:172] (0xc002e8a420) (0xc001714500) Create stream
I0128 00:25:45.597089       9 log.go:172] (0xc002e8a420) (0xc001714500) Stream added, broadcasting: 3
I0128 00:25:45.598973       9 log.go:172] (0xc002e8a420) Reply frame received for 3
I0128 00:25:45.599022       9 log.go:172] (0xc002e8a420) (0xc0023e6000) Create stream
I0128 00:25:45.599045       9 log.go:172] (0xc002e8a420) (0xc0023e6000) Stream added, broadcasting: 5
I0128 00:25:45.601201       9 log.go:172] (0xc002e8a420) Reply frame received for 5
I0128 00:25:45.696479       9 log.go:172] (0xc002e8a420) Data frame received for 3
I0128 00:25:45.696558       9 log.go:172] (0xc001714500) (3) Data frame handling
I0128 00:25:45.696589       9 log.go:172] (0xc001714500) (3) Data frame sent
I0128 00:25:45.795528       9 log.go:172] (0xc002e8a420) (0xc0023e6000) Stream removed, broadcasting: 5
I0128 00:25:45.795742       9 log.go:172] (0xc002e8a420) Data frame received for 1
I0128 00:25:45.795812       9 log.go:172] (0xc002e8a420) (0xc001714500) Stream removed, broadcasting: 3
I0128 00:25:45.795867       9 log.go:172] (0xc001714460) (1) Data frame handling
I0128 00:25:45.795896       9 log.go:172] (0xc001714460) (1) Data frame sent
I0128 00:25:45.795927       9 log.go:172] (0xc002e8a420) (0xc001714460) Stream removed, broadcasting: 1
I0128 00:25:45.795947       9 log.go:172] (0xc002e8a420) Go away received
I0128 00:25:45.796324       9 log.go:172] (0xc002e8a420) (0xc001714460) Stream removed, broadcasting: 1
I0128 00:25:45.796355       9 log.go:172] (0xc002e8a420) (0xc001714500) Stream removed, broadcasting: 3
I0128 00:25:45.796367       9 log.go:172] (0xc002e8a420) (0xc0023e6000) Stream removed, broadcasting: 5
Jan 28 00:25:45.796: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:25:45.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2260" for this suite.

• [SLOW TEST:34.977 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":280,"completed":110,"skipped":1987,"failed":0}
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:25:45.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 28 00:26:12.013: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5775 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 00:26:12.013: INFO: >>> kubeConfig: /root/.kube/config
I0128 00:26:12.083105       9 log.go:172] (0xc0022269a0) (0xc001bb8dc0) Create stream
I0128 00:26:12.083508       9 log.go:172] (0xc0022269a0) (0xc001bb8dc0) Stream added, broadcasting: 1
I0128 00:26:12.091926       9 log.go:172] (0xc0022269a0) Reply frame received for 1
I0128 00:26:12.092059       9 log.go:172] (0xc0022269a0) (0xc0029a1720) Create stream
I0128 00:26:12.092078       9 log.go:172] (0xc0022269a0) (0xc0029a1720) Stream added, broadcasting: 3
I0128 00:26:12.094896       9 log.go:172] (0xc0022269a0) Reply frame received for 3
I0128 00:26:12.094952       9 log.go:172] (0xc0022269a0) (0xc0021e8140) Create stream
I0128 00:26:12.094966       9 log.go:172] (0xc0022269a0) (0xc0021e8140) Stream added, broadcasting: 5
I0128 00:26:12.096859       9 log.go:172] (0xc0022269a0) Reply frame received for 5
I0128 00:26:12.191011       9 log.go:172] (0xc0022269a0) Data frame received for 3
I0128 00:26:12.191137       9 log.go:172] (0xc0029a1720) (3) Data frame handling
I0128 00:26:12.191159       9 log.go:172] (0xc0029a1720) (3) Data frame sent
I0128 00:26:12.338218       9 log.go:172] (0xc0022269a0) (0xc0029a1720) Stream removed, broadcasting: 3
I0128 00:26:12.338401       9 log.go:172] (0xc0022269a0) Data frame received for 1
I0128 00:26:12.338427       9 log.go:172] (0xc001bb8dc0) (1) Data frame handling
I0128 00:26:12.338442       9 log.go:172] (0xc001bb8dc0) (1) Data frame sent
I0128 00:26:12.338453       9 log.go:172] (0xc0022269a0) (0xc001bb8dc0) Stream removed, broadcasting: 1
I0128 00:26:12.338530       9 log.go:172] (0xc0022269a0) (0xc0021e8140) Stream removed, broadcasting: 5
I0128 00:26:12.338581       9 log.go:172] (0xc0022269a0) Go away received
I0128 00:26:12.338744       9 log.go:172] (0xc0022269a0) (0xc001bb8dc0) Stream removed, broadcasting: 1
I0128 00:26:12.338759       9 log.go:172] (0xc0022269a0) (0xc0029a1720) Stream removed, broadcasting: 3
I0128 00:26:12.338770       9 log.go:172] (0xc0022269a0) (0xc0021e8140) Stream removed, broadcasting: 5
Jan 28 00:26:12.338: INFO: Exec stderr: ""
Jan 28 00:26:12.338: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5775 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 00:26:12.338: INFO: >>> kubeConfig: /root/.kube/config
I0128 00:26:12.377234       9 log.go:172] (0xc002e8ab00) (0xc001714820) Create stream
I0128 00:26:12.377404       9 log.go:172] (0xc002e8ab00) (0xc001714820) Stream added, broadcasting: 1
I0128 00:26:12.383151       9 log.go:172] (0xc002e8ab00) Reply frame received for 1
I0128 00:26:12.383225       9 log.go:172] (0xc002e8ab00) (0xc0021e8280) Create stream
I0128 00:26:12.383236       9 log.go:172] (0xc002e8ab00) (0xc0021e8280) Stream added, broadcasting: 3
I0128 00:26:12.384708       9 log.go:172] (0xc002e8ab00) Reply frame received for 3
I0128 00:26:12.384727       9 log.go:172] (0xc002e8ab00) (0xc0023e60a0) Create stream
I0128 00:26:12.384736       9 log.go:172] (0xc002e8ab00) (0xc0023e60a0) Stream added, broadcasting: 5
I0128 00:26:12.387997       9 log.go:172] (0xc002e8ab00) Reply frame received for 5
I0128 00:26:12.496627       9 log.go:172] (0xc002e8ab00) Data frame received for 3
I0128 00:26:12.496688       9 log.go:172] (0xc0021e8280) (3) Data frame handling
I0128 00:26:12.496710       9 log.go:172] (0xc0021e8280) (3) Data frame sent
I0128 00:26:12.613998       9 log.go:172] (0xc002e8ab00) Data frame received for 1
I0128 00:26:12.614150       9 log.go:172] (0xc002e8ab00) (0xc0021e8280) Stream removed, broadcasting: 3
I0128 00:26:12.614218       9 log.go:172] (0xc001714820) (1) Data frame handling
I0128 00:26:12.614249       9 log.go:172] (0xc001714820) (1) Data frame sent
I0128 00:26:12.614281       9 log.go:172] (0xc002e8ab00) (0xc0023e60a0) Stream removed, broadcasting: 5
I0128 00:26:12.614348       9 log.go:172] (0xc002e8ab00) (0xc001714820) Stream removed, broadcasting: 1
I0128 00:26:12.614378       9 log.go:172] (0xc002e8ab00) Go away received
I0128 00:26:12.614901       9 log.go:172] (0xc002e8ab00) (0xc001714820) Stream removed, broadcasting: 1
I0128 00:26:12.614927       9 log.go:172] (0xc002e8ab00) (0xc0021e8280) Stream removed, broadcasting: 3
I0128 00:26:12.614952       9 log.go:172] (0xc002e8ab00) (0xc0023e60a0) Stream removed, broadcasting: 5
Jan 28 00:26:12.614: INFO: Exec stderr: ""
Jan 28 00:26:12.615: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5775 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 00:26:12.615: INFO: >>> kubeConfig: /root/.kube/config
I0128 00:26:12.655170       9 log.go:172] (0xc001f20840) (0xc0021e8820) Create stream
I0128 00:26:12.655245       9 log.go:172] (0xc001f20840) (0xc0021e8820) Stream added, broadcasting: 1
I0128 00:26:12.658739       9 log.go:172] (0xc001f20840) Reply frame received for 1
I0128 00:26:12.658877       9 log.go:172] (0xc001f20840) (0xc001bb9040) Create stream
I0128 00:26:12.658891       9 log.go:172] (0xc001f20840) (0xc001bb9040) Stream added, broadcasting: 3
I0128 00:26:12.659760       9 log.go:172] (0xc001f20840) Reply frame received for 3
I0128 00:26:12.659783       9 log.go:172] (0xc001f20840) (0xc0029a17c0) Create stream
I0128 00:26:12.659791       9 log.go:172] (0xc001f20840) (0xc0029a17c0) Stream added, broadcasting: 5
I0128 00:26:12.660553       9 log.go:172] (0xc001f20840) Reply frame received for 5
I0128 00:26:12.730029       9 log.go:172] (0xc001f20840) Data frame received for 3
I0128 00:26:12.730083       9 log.go:172] (0xc001bb9040) (3) Data frame handling
I0128 00:26:12.730152       9 log.go:172] (0xc001bb9040) (3) Data frame sent
I0128 00:26:12.790311       9 log.go:172] (0xc001f20840) (0xc001bb9040) Stream removed, broadcasting: 3
I0128 00:26:12.790428       9 log.go:172] (0xc001f20840) Data frame received for 1
I0128 00:26:12.790442       9 log.go:172] (0xc001f20840) (0xc0029a17c0) Stream removed, broadcasting: 5
I0128 00:26:12.790472       9 log.go:172] (0xc0021e8820) (1) Data frame handling
I0128 00:26:12.790496       9 log.go:172] (0xc0021e8820) (1) Data frame sent
I0128 00:26:12.790511       9 log.go:172] (0xc001f20840) (0xc0021e8820) Stream removed, broadcasting: 1
I0128 00:26:12.790525       9 log.go:172] (0xc001f20840) Go away received
I0128 00:26:12.790724       9 log.go:172] (0xc001f20840) (0xc0021e8820) Stream removed, broadcasting: 1
I0128 00:26:12.790750       9 log.go:172] (0xc001f20840) (0xc001bb9040) Stream removed, broadcasting: 3
I0128 00:26:12.790760       9 log.go:172] (0xc001f20840) (0xc0029a17c0) Stream removed, broadcasting: 5
Jan 28 00:26:12.790: INFO: Exec stderr: ""
Jan 28 00:26:12.790: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5775 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 00:26:12.790: INFO: >>> kubeConfig: /root/.kube/config
I0128 00:26:12.823040       9 log.go:172] (0xc0022271e0) (0xc001bb94a0) Create stream
I0128 00:26:12.823112       9 log.go:172] (0xc0022271e0) (0xc001bb94a0) Stream added, broadcasting: 1
I0128 00:26:12.826968       9 log.go:172] (0xc0022271e0) Reply frame received for 1
I0128 00:26:12.827006       9 log.go:172] (0xc0022271e0) (0xc0023e6140) Create stream
I0128 00:26:12.827019       9 log.go:172] (0xc0022271e0) (0xc0023e6140) Stream added, broadcasting: 3
I0128 00:26:12.827998       9 log.go:172] (0xc0022271e0) Reply frame received for 3
I0128 00:26:12.828020       9 log.go:172] (0xc0022271e0) (0xc0029a1860) Create stream
I0128 00:26:12.828031       9 log.go:172] (0xc0022271e0) (0xc0029a1860) Stream added, broadcasting: 5
I0128 00:26:12.828972       9 log.go:172] (0xc0022271e0) Reply frame received for 5
I0128 00:26:12.900543       9 log.go:172] (0xc0022271e0) Data frame received for 3
I0128 00:26:12.900720       9 log.go:172] (0xc0023e6140) (3) Data frame handling
I0128 00:26:12.900776       9 log.go:172] (0xc0023e6140) (3) Data frame sent
I0128 00:26:12.971542       9 log.go:172] (0xc0022271e0) (0xc0023e6140) Stream removed, broadcasting: 3
I0128 00:26:12.971670       9 log.go:172] (0xc0022271e0) Data frame received for 1
I0128 00:26:12.971710       9 log.go:172] (0xc001bb94a0) (1) Data frame handling
I0128 00:26:12.971737       9 log.go:172] (0xc001bb94a0) (1) Data frame sent
I0128 00:26:12.971838       9 log.go:172] (0xc0022271e0) (0xc001bb94a0) Stream removed, broadcasting: 1
I0128 00:26:12.973717       9 log.go:172] (0xc0022271e0) (0xc0029a1860) Stream removed, broadcasting: 5
I0128 00:26:12.973883       9 log.go:172] (0xc0022271e0) (0xc001bb94a0) Stream removed, broadcasting: 1
I0128 00:26:12.973908       9 log.go:172] (0xc0022271e0) (0xc0023e6140) Stream removed, broadcasting: 3
I0128 00:26:12.973928       9 log.go:172] (0xc0022271e0) (0xc0029a1860) Stream removed, broadcasting: 5
Jan 28 00:26:12.973: INFO: Exec stderr: ""
I0128 00:26:12.974049       9 log.go:172] (0xc0022271e0) Go away received
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 28 00:26:12.974: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5775 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 00:26:12.974: INFO: >>> kubeConfig: /root/.kube/config
I0128 00:26:13.030540       9 log.go:172] (0xc002227760) (0xc001bb9860) Create stream
I0128 00:26:13.030722       9 log.go:172] (0xc002227760) (0xc001bb9860) Stream added, broadcasting: 1
I0128 00:26:13.037261       9 log.go:172] (0xc002227760) Reply frame received for 1
I0128 00:26:13.037450       9 log.go:172] (0xc002227760) (0xc0017148c0) Create stream
I0128 00:26:13.037496       9 log.go:172] (0xc002227760) (0xc0017148c0) Stream added, broadcasting: 3
I0128 00:26:13.039443       9 log.go:172] (0xc002227760) Reply frame received for 3
I0128 00:26:13.039479       9 log.go:172] (0xc002227760) (0xc0029a1900) Create stream
I0128 00:26:13.039510       9 log.go:172] (0xc002227760) (0xc0029a1900) Stream added, broadcasting: 5
I0128 00:26:13.041277       9 log.go:172] (0xc002227760) Reply frame received for 5
I0128 00:26:13.124284       9 log.go:172] (0xc002227760) Data frame received for 3
I0128 00:26:13.124347       9 log.go:172] (0xc0017148c0) (3) Data frame handling
I0128 00:26:13.124387       9 log.go:172] (0xc0017148c0) (3) Data frame sent
I0128 00:26:13.192072       9 log.go:172] (0xc002227760) Data frame received for 1
I0128 00:26:13.192114       9 log.go:172] (0xc002227760) (0xc0029a1900) Stream removed, broadcasting: 5
I0128 00:26:13.192150       9 log.go:172] (0xc001bb9860) (1) Data frame handling
I0128 00:26:13.192158       9 log.go:172] (0xc001bb9860) (1) Data frame sent
I0128 00:26:13.192171       9 log.go:172] (0xc002227760) (0xc0017148c0) Stream removed, broadcasting: 3
I0128 00:26:13.192194       9 log.go:172] (0xc002227760) (0xc001bb9860) Stream removed, broadcasting: 1
I0128 00:26:13.192353       9 log.go:172] (0xc002227760) (0xc001bb9860) Stream removed, broadcasting: 1
I0128 00:26:13.192363       9 log.go:172] (0xc002227760) (0xc0017148c0) Stream removed, broadcasting: 3
I0128 00:26:13.192371       9 log.go:172] (0xc002227760) (0xc0029a1900) Stream removed, broadcasting: 5
Jan 28 00:26:13.192: INFO: Exec stderr: ""
I0128 00:26:13.192446       9 log.go:172] (0xc002227760) Go away received
Jan 28 00:26:13.192: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5775 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 00:26:13.192: INFO: >>> kubeConfig: /root/.kube/config
I0128 00:26:13.229491       9 log.go:172] (0xc002227d90) (0xc001bb9b80) Create stream
I0128 00:26:13.229585       9 log.go:172] (0xc002227d90) (0xc001bb9b80) Stream added, broadcasting: 1
I0128 00:26:13.231912       9 log.go:172] (0xc002227d90) Reply frame received for 1
I0128 00:26:13.231936       9 log.go:172] (0xc002227d90) (0xc001bb9d60) Create stream
I0128 00:26:13.231943       9 log.go:172] (0xc002227d90) (0xc001bb9d60) Stream added, broadcasting: 3
I0128 00:26:13.232812       9 log.go:172] (0xc002227d90) Reply frame received for 3
I0128 00:26:13.232837       9 log.go:172] (0xc002227d90) (0xc0029a19a0) Create stream
I0128 00:26:13.232852       9 log.go:172] (0xc002227d90) (0xc0029a19a0) Stream added, broadcasting: 5
I0128 00:26:13.233758       9 log.go:172] (0xc002227d90) Reply frame received for 5
I0128 00:26:13.289931       9 log.go:172] (0xc002227d90) Data frame received for 3
I0128 00:26:13.290006       9 log.go:172] (0xc001bb9d60) (3) Data frame handling
I0128 00:26:13.290031       9 log.go:172] (0xc001bb9d60) (3) Data frame sent
I0128 00:26:13.350893       9 log.go:172] (0xc002227d90) Data frame received for 1
I0128 00:26:13.350931       9 log.go:172] (0xc002227d90) (0xc001bb9d60) Stream removed, broadcasting: 3
I0128 00:26:13.350989       9 log.go:172] (0xc001bb9b80) (1) Data frame handling
I0128 00:26:13.351021       9 log.go:172] (0xc001bb9b80) (1) Data frame sent
I0128 00:26:13.351033       9 log.go:172] (0xc002227d90) (0xc0029a19a0) Stream removed, broadcasting: 5
I0128 00:26:13.351050       9 log.go:172] (0xc002227d90) (0xc001bb9b80) Stream removed, broadcasting: 1
I0128 00:26:13.351059       9 log.go:172] (0xc002227d90) Go away received
I0128 00:26:13.351296       9 log.go:172] (0xc002227d90) (0xc001bb9b80) Stream removed, broadcasting: 1
I0128 00:26:13.351310       9 log.go:172] (0xc002227d90) (0xc001bb9d60) Stream removed, broadcasting: 3
I0128 00:26:13.351315       9 log.go:172] (0xc002227d90) (0xc0029a19a0) Stream removed, broadcasting: 5
Jan 28 00:26:13.351: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 28 00:26:13.351: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5775 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 00:26:13.351: INFO: >>> kubeConfig: /root/.kube/config
I0128 00:26:13.426195       9 log.go:172] (0xc00264b290) (0xc0029a1b80) Create stream
I0128 00:26:13.426241       9 log.go:172] (0xc00264b290) (0xc0029a1b80) Stream added, broadcasting: 1
I0128 00:26:13.429881       9 log.go:172] (0xc00264b290) Reply frame received for 1
I0128 00:26:13.429922       9 log.go:172] (0xc00264b290) (0xc0023e6500) Create stream
I0128 00:26:13.429930       9 log.go:172] (0xc00264b290) (0xc0023e6500) Stream added, broadcasting: 3
I0128 00:26:13.430982       9 log.go:172] (0xc00264b290) Reply frame received for 3
I0128 00:26:13.431038       9 log.go:172] (0xc00264b290) (0xc0018023c0) Create stream
I0128 00:26:13.431053       9 log.go:172] (0xc00264b290) (0xc0018023c0) Stream added, broadcasting: 5
I0128 00:26:13.431974       9 log.go:172] (0xc00264b290) Reply frame received for 5
I0128 00:26:13.500627       9 log.go:172] (0xc00264b290) Data frame received for 3
I0128 00:26:13.500665       9 log.go:172] (0xc0023e6500) (3) Data frame handling
I0128 00:26:13.500687       9 log.go:172] (0xc0023e6500) (3) Data frame sent
I0128 00:26:13.605613       9 log.go:172] (0xc00264b290) (0xc0018023c0) Stream removed, broadcasting: 5
I0128 00:26:13.606216       9 log.go:172] (0xc00264b290) (0xc0023e6500) Stream removed, broadcasting: 3
I0128 00:26:13.606349       9 log.go:172] (0xc00264b290) Data frame received for 1
I0128 00:26:13.606387       9 log.go:172] (0xc0029a1b80) (1) Data frame handling
I0128 00:26:13.606428       9 log.go:172] (0xc0029a1b80) (1) Data frame sent
I0128 00:26:13.606449       9 log.go:172] (0xc00264b290) (0xc0029a1b80) Stream removed, broadcasting: 1
I0128 00:26:13.606481       9 log.go:172] (0xc00264b290) Go away received
I0128 00:26:13.607241       9 log.go:172] (0xc00264b290) (0xc0029a1b80) Stream removed, broadcasting: 1
I0128 00:26:13.607325       9 log.go:172] (0xc00264b290) (0xc0023e6500) Stream removed, broadcasting: 3
I0128 00:26:13.607336       9 log.go:172] (0xc00264b290) (0xc0018023c0) Stream removed, broadcasting: 5
Jan 28 00:26:13.607: INFO: Exec stderr: ""
Jan 28 00:26:13.607: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5775 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 00:26:13.607: INFO: >>> kubeConfig: /root/.kube/config
I0128 00:26:13.665746       9 log.go:172] (0xc00264b8c0) (0xc0029a1d60) Create stream
I0128 00:26:13.665945       9 log.go:172] (0xc00264b8c0) (0xc0029a1d60) Stream added, broadcasting: 1
I0128 00:26:13.671275       9 log.go:172] (0xc00264b8c0) Reply frame received for 1
I0128 00:26:13.671318       9 log.go:172] (0xc00264b8c0) (0xc0021e8aa0) Create stream
I0128 00:26:13.671329       9 log.go:172] (0xc00264b8c0) (0xc0021e8aa0) Stream added, broadcasting: 3
I0128 00:26:13.672408       9 log.go:172] (0xc00264b8c0) Reply frame received for 3
I0128 00:26:13.672467       9 log.go:172] (0xc00264b8c0) (0xc0021e8be0) Create stream
I0128 00:26:13.672480       9 log.go:172] (0xc00264b8c0) (0xc0021e8be0) Stream added, broadcasting: 5
I0128 00:26:13.673689       9 log.go:172] (0xc00264b8c0) Reply frame received for 5
I0128 00:26:13.752565       9 log.go:172] (0xc00264b8c0) Data frame received for 3
I0128 00:26:13.752631       9 log.go:172] (0xc0021e8aa0) (3) Data frame handling
I0128 00:26:13.752667       9 log.go:172] (0xc0021e8aa0) (3) Data frame sent
I0128 00:26:13.851639       9 log.go:172] (0xc00264b8c0) Data frame received for 1
I0128 00:26:13.851737       9 log.go:172] (0xc0029a1d60) (1) Data frame handling
I0128 00:26:13.851759       9 log.go:172] (0xc0029a1d60) (1) Data frame sent
I0128 00:26:13.852619       9 log.go:172] (0xc00264b8c0) (0xc0029a1d60) Stream removed, broadcasting: 1
I0128 00:26:13.853531       9 log.go:172] (0xc00264b8c0) (0xc0021e8be0) Stream removed, broadcasting: 5
I0128 00:26:13.853611       9 log.go:172] (0xc00264b8c0) (0xc0021e8aa0) Stream removed, broadcasting: 3
I0128 00:26:13.853756       9 log.go:172] (0xc00264b8c0) (0xc0029a1d60) Stream removed, broadcasting: 1
I0128 00:26:13.853787       9 log.go:172] (0xc00264b8c0) (0xc0021e8aa0) Stream removed, broadcasting: 3
I0128 00:26:13.853804       9 log.go:172] (0xc00264b8c0) (0xc0021e8be0) Stream removed, broadcasting: 5
Jan 28 00:26:13.853: INFO: Exec stderr: ""
I0128 00:26:13.854092       9 log.go:172] (0xc00264b8c0) Go away received
Jan 28 00:26:13.854: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5775 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 00:26:13.854: INFO: >>> kubeConfig: /root/.kube/config
I0128 00:26:13.923918       9 log.go:172] (0xc001f20bb0) (0xc0021e90e0) Create stream
I0128 00:26:13.924092       9 log.go:172] (0xc001f20bb0) (0xc0021e90e0) Stream added, broadcasting: 1
I0128 00:26:13.929040       9 log.go:172] (0xc001f20bb0) Reply frame received for 1
I0128 00:26:13.929145       9 log.go:172] (0xc001f20bb0) (0xc001802b40) Create stream
I0128 00:26:13.929178       9 log.go:172] (0xc001f20bb0) (0xc001802b40) Stream added, broadcasting: 3
I0128 00:26:13.930581       9 log.go:172] (0xc001f20bb0) Reply frame received for 3
I0128 00:26:13.930623       9 log.go:172] (0xc001f20bb0) (0xc0023e6640) Create stream
I0128 00:26:13.930639       9 log.go:172] (0xc001f20bb0) (0xc0023e6640) Stream added, broadcasting: 5
I0128 00:26:13.931455       9 log.go:172] (0xc001f20bb0) Reply frame received for 5
I0128 00:26:14.018207       9 log.go:172] (0xc001f20bb0) Data frame received for 3
I0128 00:26:14.018318       9 log.go:172] (0xc001802b40) (3) Data frame handling
I0128 00:26:14.018428       9 log.go:172] (0xc001802b40) (3) Data frame sent
I0128 00:26:14.110806       9 log.go:172] (0xc001f20bb0) (0xc001802b40) Stream removed, broadcasting: 3
I0128 00:26:14.110947       9 log.go:172] (0xc001f20bb0) Data frame received for 1
I0128 00:26:14.110980       9 log.go:172] (0xc0021e90e0) (1) Data frame handling
I0128 00:26:14.111001       9 log.go:172] (0xc0021e90e0) (1) Data frame sent
I0128 00:26:14.111013       9 log.go:172] (0xc001f20bb0) (0xc0021e90e0) Stream removed, broadcasting: 1
I0128 00:26:14.111312       9 log.go:172] (0xc001f20bb0) (0xc0023e6640) Stream removed, broadcasting: 5
I0128 00:26:14.111383       9 log.go:172] (0xc001f20bb0) (0xc0021e90e0) Stream removed, broadcasting: 1
I0128 00:26:14.111399       9 log.go:172] (0xc001f20bb0) (0xc001802b40) Stream removed, broadcasting: 3
I0128 00:26:14.111416       9 log.go:172] (0xc001f20bb0) (0xc0023e6640) Stream removed, broadcasting: 5
I0128 00:26:14.111811       9 log.go:172] (0xc001f20bb0) Go away received
Jan 28 00:26:14.112: INFO: Exec stderr: ""
Jan 28 00:26:14.112: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5775 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 00:26:14.112: INFO: >>> kubeConfig: /root/.kube/config
I0128 00:26:14.171351       9 log.go:172] (0xc001f21130) (0xc0021e94a0) Create stream
I0128 00:26:14.171388       9 log.go:172] (0xc001f21130) (0xc0021e94a0) Stream added, broadcasting: 1
I0128 00:26:14.173425       9 log.go:172] (0xc001f21130) Reply frame received for 1
I0128 00:26:14.173447       9 log.go:172] (0xc001f21130) (0xc001714960) Create stream
I0128 00:26:14.173455       9 log.go:172] (0xc001f21130) (0xc001714960) Stream added, broadcasting: 3
I0128 00:26:14.174390       9 log.go:172] (0xc001f21130) Reply frame received for 3
I0128 00:26:14.174412       9 log.go:172] (0xc001f21130) (0xc0029a1e00) Create stream
I0128 00:26:14.174418       9 log.go:172] (0xc001f21130) (0xc0029a1e00) Stream added, broadcasting: 5
I0128 00:26:14.175271       9 log.go:172] (0xc001f21130) Reply frame received for 5
I0128 00:26:14.230833       9 log.go:172] (0xc001f21130) Data frame received for 3
I0128 00:26:14.230882       9 log.go:172] (0xc001714960) (3) Data frame handling
I0128 00:26:14.230900       9 log.go:172] (0xc001714960) (3) Data frame sent
I0128 00:26:14.287317       9 log.go:172] (0xc001f21130) (0xc0029a1e00) Stream removed, broadcasting: 5
I0128 00:26:14.287395       9 log.go:172] (0xc001f21130) Data frame received for 1
I0128 00:26:14.287414       9 log.go:172] (0xc001f21130) (0xc001714960) Stream removed, broadcasting: 3
I0128 00:26:14.287443       9 log.go:172] (0xc0021e94a0) (1) Data frame handling
I0128 00:26:14.287465       9 log.go:172] (0xc0021e94a0) (1) Data frame sent
I0128 00:26:14.287473       9 log.go:172] (0xc001f21130) (0xc0021e94a0) Stream removed, broadcasting: 1
I0128 00:26:14.287489       9 log.go:172] (0xc001f21130) Go away received
I0128 00:26:14.287962       9 log.go:172] (0xc001f21130) (0xc0021e94a0) Stream removed, broadcasting: 1
I0128 00:26:14.287974       9 log.go:172] (0xc001f21130) (0xc001714960) Stream removed, broadcasting: 3
I0128 00:26:14.287992       9 log.go:172] (0xc001f21130) (0xc0029a1e00) Stream removed, broadcasting: 5
Jan 28 00:26:14.288: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:26:14.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-5775" for this suite.

• [SLOW TEST:28.491 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":111,"skipped":1987,"failed":0}
SSSSSSSSSSSSSSS
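
The KubeletManagedEtcHosts test above exec's "cat /etc/hosts" in a series of containers to check when the kubelet does and does not manage that file: it manages it for ordinary pods, but not when the pod runs with hostNetwork=true and not when the container mounts its own file over /etc/hosts. A sketch of the two opt-out shapes, assuming busybox containers kept alive with sleep (names, images, and commands are illustrative):

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// Case 1: hostNetwork=true — the pod shares the node's network
    	// namespace, so the node's own /etc/hosts is left untouched.
    	hostNetPod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "test-host-network-pod"},
    		Spec: corev1.PodSpec{
    			HostNetwork: true,
    			Containers: []corev1.Container{{
    				Name:    "busybox-1",
    				Image:   "busybox",
    				Command: []string{"sleep", "3600"},
    			}},
    		},
    	}

    	// Case 2: the container mounts something over /etc/hosts itself
    	// (the busybox-3 case above), so the kubelet leaves the mount alone.
    	mountPod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
    		Spec: corev1.PodSpec{
    			Volumes: []corev1.Volume{{
    				Name: "hosts-file",
    				VolumeSource: corev1.VolumeSource{
    					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:    "busybox-3",
    				Image:   "busybox",
    				Command: []string{"sleep", "3600"},
    				VolumeMounts: []corev1.VolumeMount{{
    					Name:      "hosts-file",
    					MountPath: "/etc/hosts",
    				}},
    			}},
    		},
    	}

    	for _, p := range []corev1.Pod{hostNetPod, mountPod} {
    		out, _ := json.MarshalIndent(p, "", "  ")
    		fmt.Println(string(out))
    	}
    }
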
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:26:14.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-9446/configmap-test-671c12be-90a6-4dce-b554-2c0b2059bab1
STEP: Creating a pod to test consume configMaps
Jan 28 00:26:14.525: INFO: Waiting up to 5m0s for pod "pod-configmaps-600c8969-a779-4f03-a4f8-d93fac262353" in namespace "configmap-9446" to be "success or failure"
Jan 28 00:26:14.541: INFO: Pod "pod-configmaps-600c8969-a779-4f03-a4f8-d93fac262353": Phase="Pending", Reason="", readiness=false. Elapsed: 15.256203ms
Jan 28 00:26:16.550: INFO: Pod "pod-configmaps-600c8969-a779-4f03-a4f8-d93fac262353": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024651015s
Jan 28 00:26:18.564: INFO: Pod "pod-configmaps-600c8969-a779-4f03-a4f8-d93fac262353": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038467944s
Jan 28 00:26:20.571: INFO: Pod "pod-configmaps-600c8969-a779-4f03-a4f8-d93fac262353": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045608257s
Jan 28 00:26:22.584: INFO: Pod "pod-configmaps-600c8969-a779-4f03-a4f8-d93fac262353": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058198972s
STEP: Saw pod success
Jan 28 00:26:22.584: INFO: Pod "pod-configmaps-600c8969-a779-4f03-a4f8-d93fac262353" satisfied condition "success or failure"
Jan 28 00:26:22.588: INFO: Trying to get logs from node jerma-node pod pod-configmaps-600c8969-a779-4f03-a4f8-d93fac262353 container env-test: 
STEP: delete the pod
Jan 28 00:26:22.670: INFO: Waiting for pod pod-configmaps-600c8969-a779-4f03-a4f8-d93fac262353 to disappear
Jan 28 00:26:22.692: INFO: Pod pod-configmaps-600c8969-a779-4f03-a4f8-d93fac262353 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:26:22.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9446" for this suite.

• [SLOW TEST:8.440 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":112,"skipped":2002,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
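
The ConfigMap-environment test creates a ConfigMap, references one of its keys from a container env var, and greps the pod's output for the value. A sketch of the container shape, reusing the ConfigMap name from the log; the key and env var name are illustrative assumptions:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// Pull a single ConfigMap key into the container's environment.
    	c := corev1.Container{
    		Name:    "env-test",
    		Image:   "busybox",
    		Command: []string{"sh", "-c", "env"}, // assumed: dump env for the log check
    		Env: []corev1.EnvVar{{
    			Name: "CONFIG_DATA_1", // illustrative name
    			ValueFrom: &corev1.EnvVarSource{
    				ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
    					LocalObjectReference: corev1.LocalObjectReference{
    						Name: "configmap-test-671c12be-90a6-4dce-b554-2c0b2059bab1",
    					},
    					Key: "data-1", // assumed key
    				},
    			},
    		}},
    	}
    	out, _ := json.MarshalIndent(c, "", "  ")
    	fmt.Println(string(out))
    }
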
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:26:22.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0128 00:26:36.670188       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 00:26:36.670: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:26:36.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5528" for this suite.

• [SLOW TEST:17.294 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":280,"completed":113,"skipped":2027,"failed":0}
SSSSSSSSSSSSS
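
The garbage-collector test gives half of simpletest-rc-to-be-deleted's pods a second owner reference pointing at simpletest-rc-to-stay, deletes the first RC, and asserts those pods survive: the collector only removes the dangling owner reference, because a dependent is deleted only once it has no remaining live owners. A sketch of such a dual-owner reference list, with placeholder UIDs (in the test they come from the created RCs) and the propagation policy noted as a comment rather than a real client call:

    package main

    import (
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    )

    func main() {
    	// Two owner references on the same dependent pod: one for the RC
    	// being deleted, one for the RC that stays.
    	block := true
    	owners := []metav1.OwnerReference{
    		{
    			APIVersion:         "v1",
    			Kind:               "ReplicationController",
    			Name:               "simpletest-rc-to-be-deleted",
    			UID:                types.UID("placeholder-uid-1"),
    			BlockOwnerDeletion: &block,
    		},
    		{
    			APIVersion: "v1",
    			Kind:       "ReplicationController",
    			Name:       "simpletest-rc-to-stay",
    			UID:        types.UID("placeholder-uid-2"),
    		},
    	}
    	// Deleting the first owner (passed to the client's Delete call via
    	// DeleteOptions{PropagationPolicy: &policy}) must leave these pods
    	// alive, since a valid owner remains.
    	policy := metav1.DeletePropagationForeground
    	fmt.Printf("ownerReferences: %+v\npolicy: %s\n", owners, policy)
    }
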
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:26:40.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:26:42.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-1944
I0128 00:26:43.476216       9 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1944, replica count: 1
I0128 00:26:44.527077       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:26:45.528044       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:26:46.528892       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:26:47.529556       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:26:48.530330       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:26:49.530865       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:26:50.531411       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:26:51.531785       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:26:52.532319       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:26:53.532885       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:26:54.533433       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:26:55.533838       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:26:56.534335       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:26:57.534755       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:26:58.535399       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:26:59.536185       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:27:00.536665       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 28 00:27:00.670: INFO: Created: latency-svc-c4w84
Jan 28 00:27:00.684: INFO: Got endpoints: latency-svc-c4w84 [46.864358ms]
Jan 28 00:27:00.756: INFO: Created: latency-svc-ppmwm
Jan 28 00:27:00.809: INFO: Got endpoints: latency-svc-ppmwm [124.411026ms]
Jan 28 00:27:00.813: INFO: Created: latency-svc-rgwcz
Jan 28 00:27:00.816: INFO: Got endpoints: latency-svc-rgwcz [132.613212ms]
Jan 28 00:27:00.885: INFO: Created: latency-svc-nppx9
Jan 28 00:27:00.906: INFO: Got endpoints: latency-svc-nppx9 [222.156403ms]
Jan 28 00:27:00.940: INFO: Created: latency-svc-jz2p4
Jan 28 00:27:00.951: INFO: Got endpoints: latency-svc-jz2p4 [266.134853ms]
Jan 28 00:27:01.038: INFO: Created: latency-svc-5hxkg
Jan 28 00:27:01.043: INFO: Got endpoints: latency-svc-5hxkg [358.454806ms]
Jan 28 00:27:01.083: INFO: Created: latency-svc-8v28l
Jan 28 00:27:01.108: INFO: Got endpoints: latency-svc-8v28l [423.1538ms]
Jan 28 00:27:01.212: INFO: Created: latency-svc-rnsz7
Jan 28 00:27:01.220: INFO: Got endpoints: latency-svc-rnsz7 [535.573251ms]
Jan 28 00:27:01.252: INFO: Created: latency-svc-bdn8t
Jan 28 00:27:01.257: INFO: Got endpoints: latency-svc-bdn8t [572.537603ms]
Jan 28 00:27:01.290: INFO: Created: latency-svc-2tpmt
Jan 28 00:27:01.300: INFO: Got endpoints: latency-svc-2tpmt [615.89409ms]
Jan 28 00:27:01.416: INFO: Created: latency-svc-m8x4p
Jan 28 00:27:01.425: INFO: Got endpoints: latency-svc-m8x4p [740.173727ms]
Jan 28 00:27:01.454: INFO: Created: latency-svc-vqsth
Jan 28 00:27:01.466: INFO: Got endpoints: latency-svc-vqsth [781.400244ms]
Jan 28 00:27:01.485: INFO: Created: latency-svc-7nfbd
Jan 28 00:27:01.563: INFO: Got endpoints: latency-svc-7nfbd [878.35366ms]
Jan 28 00:27:01.568: INFO: Created: latency-svc-6s7b4
Jan 28 00:27:01.593: INFO: Got endpoints: latency-svc-6s7b4 [907.955899ms]
Jan 28 00:27:01.642: INFO: Created: latency-svc-z4b2j
Jan 28 00:27:01.651: INFO: Got endpoints: latency-svc-z4b2j [966.317607ms]
Jan 28 00:27:01.717: INFO: Created: latency-svc-rgqcq
Jan 28 00:27:01.724: INFO: Got endpoints: latency-svc-rgqcq [131.543266ms]
Jan 28 00:27:01.750: INFO: Created: latency-svc-ht4qj
Jan 28 00:27:01.760: INFO: Got endpoints: latency-svc-ht4qj [1.076219407s]
Jan 28 00:27:01.781: INFO: Created: latency-svc-j4dxf
Jan 28 00:27:01.793: INFO: Got endpoints: latency-svc-j4dxf [984.349659ms]
Jan 28 00:27:01.813: INFO: Created: latency-svc-2zj6s
Jan 28 00:27:01.862: INFO: Got endpoints: latency-svc-2zj6s [1.045061869s]
Jan 28 00:27:01.882: INFO: Created: latency-svc-5lz6p
Jan 28 00:27:01.890: INFO: Got endpoints: latency-svc-5lz6p [983.151165ms]
Jan 28 00:27:01.912: INFO: Created: latency-svc-d99qf
Jan 28 00:27:01.920: INFO: Got endpoints: latency-svc-d99qf [968.604796ms]
Jan 28 00:27:01.973: INFO: Created: latency-svc-nz8qp
Jan 28 00:27:02.063: INFO: Got endpoints: latency-svc-nz8qp [1.019910171s]
Jan 28 00:27:02.077: INFO: Created: latency-svc-krjbz
Jan 28 00:27:02.106: INFO: Got endpoints: latency-svc-krjbz [997.956113ms]
Jan 28 00:27:02.114: INFO: Created: latency-svc-fqm75
Jan 28 00:27:02.139: INFO: Got endpoints: latency-svc-fqm75 [918.337275ms]
Jan 28 00:27:02.152: INFO: Created: latency-svc-dkntq
Jan 28 00:27:02.152: INFO: Got endpoints: latency-svc-dkntq [894.984728ms]
Jan 28 00:27:02.312: INFO: Created: latency-svc-jdcnv
Jan 28 00:27:02.333: INFO: Got endpoints: latency-svc-jdcnv [1.033074836s]
Jan 28 00:27:02.551: INFO: Created: latency-svc-l456b
Jan 28 00:27:02.571: INFO: Got endpoints: latency-svc-l456b [1.146392497s]
Jan 28 00:27:02.639: INFO: Created: latency-svc-dqx85
Jan 28 00:27:02.642: INFO: Got endpoints: latency-svc-dqx85 [1.175718983s]
Jan 28 00:27:02.822: INFO: Created: latency-svc-9t8hm
Jan 28 00:27:02.977: INFO: Got endpoints: latency-svc-9t8hm [1.414469171s]
Jan 28 00:27:03.031: INFO: Created: latency-svc-hm4mm
Jan 28 00:27:03.044: INFO: Got endpoints: latency-svc-hm4mm [1.393367164s]
Jan 28 00:27:03.181: INFO: Created: latency-svc-bhdkn
Jan 28 00:27:03.223: INFO: Created: latency-svc-hxf8t
Jan 28 00:27:03.223: INFO: Got endpoints: latency-svc-bhdkn [1.498820239s]
Jan 28 00:27:03.238: INFO: Got endpoints: latency-svc-hxf8t [1.477594028s]
Jan 28 00:27:03.328: INFO: Created: latency-svc-6n4r5
Jan 28 00:27:03.335: INFO: Got endpoints: latency-svc-6n4r5 [1.542122441s]
Jan 28 00:27:03.354: INFO: Created: latency-svc-c84zq
Jan 28 00:27:03.366: INFO: Got endpoints: latency-svc-c84zq [1.504656461s]
Jan 28 00:27:03.388: INFO: Created: latency-svc-j9bfh
Jan 28 00:27:03.411: INFO: Got endpoints: latency-svc-j9bfh [1.521687583s]
Jan 28 00:27:03.495: INFO: Created: latency-svc-fx6tx
Jan 28 00:27:03.499: INFO: Got endpoints: latency-svc-fx6tx [1.579196007s]
Jan 28 00:27:03.530: INFO: Created: latency-svc-67lmk
Jan 28 00:27:03.534: INFO: Got endpoints: latency-svc-67lmk [1.470119774s]
Jan 28 00:27:03.562: INFO: Created: latency-svc-mkgn5
Jan 28 00:27:03.563: INFO: Got endpoints: latency-svc-mkgn5 [1.457587253s]
Jan 28 00:27:03.660: INFO: Created: latency-svc-jmhpg
Jan 28 00:27:03.661: INFO: Got endpoints: latency-svc-jmhpg [1.522070169s]
Jan 28 00:27:03.702: INFO: Created: latency-svc-p9czp
Jan 28 00:27:03.710: INFO: Got endpoints: latency-svc-p9czp [1.557924083s]
Jan 28 00:27:03.803: INFO: Created: latency-svc-p5xxx
Jan 28 00:27:03.812: INFO: Got endpoints: latency-svc-p5xxx [1.478090705s]
Jan 28 00:27:03.851: INFO: Created: latency-svc-5jnp5
Jan 28 00:27:03.855: INFO: Got endpoints: latency-svc-5jnp5 [1.283211753s]
Jan 28 00:27:03.900: INFO: Created: latency-svc-vltd7
Jan 28 00:27:03.990: INFO: Created: latency-svc-7h4kl
Jan 28 00:27:04.000: INFO: Got endpoints: latency-svc-vltd7 [1.357660597s]
Jan 28 00:27:04.001: INFO: Got endpoints: latency-svc-7h4kl [1.02322583s]
Jan 28 00:27:04.039: INFO: Created: latency-svc-xpdjl
Jan 28 00:27:04.042: INFO: Got endpoints: latency-svc-xpdjl [997.749514ms]
Jan 28 00:27:04.066: INFO: Created: latency-svc-h75tq
Jan 28 00:27:04.070: INFO: Got endpoints: latency-svc-h75tq [846.900266ms]
Jan 28 00:27:04.168: INFO: Created: latency-svc-zqgcl
Jan 28 00:27:04.194: INFO: Got endpoints: latency-svc-zqgcl [956.096801ms]
Jan 28 00:27:04.250: INFO: Created: latency-svc-lc848
Jan 28 00:27:04.255: INFO: Got endpoints: latency-svc-lc848 [919.054053ms]
Jan 28 00:27:04.345: INFO: Created: latency-svc-9wwgl
Jan 28 00:27:04.369: INFO: Got endpoints: latency-svc-9wwgl [1.002214381s]
Jan 28 00:27:04.377: INFO: Created: latency-svc-7pnm2
Jan 28 00:27:04.398: INFO: Got endpoints: latency-svc-7pnm2 [986.60009ms]
Jan 28 00:27:04.432: INFO: Created: latency-svc-nsnbt
Jan 28 00:27:04.440: INFO: Got endpoints: latency-svc-nsnbt [940.71045ms]
Jan 28 00:27:04.558: INFO: Created: latency-svc-k98tf
Jan 28 00:27:04.561: INFO: Got endpoints: latency-svc-k98tf [1.026770065s]
Jan 28 00:27:04.593: INFO: Created: latency-svc-v82f4
Jan 28 00:27:04.626: INFO: Got endpoints: latency-svc-v82f4 [1.062491109s]
Jan 28 00:27:04.645: INFO: Created: latency-svc-8sjv6
Jan 28 00:27:04.728: INFO: Got endpoints: latency-svc-8sjv6 [1.067482051s]
Jan 28 00:27:04.738: INFO: Created: latency-svc-fr4pg
Jan 28 00:27:04.743: INFO: Got endpoints: latency-svc-fr4pg [1.032939542s]
Jan 28 00:27:04.774: INFO: Created: latency-svc-7l6xz
Jan 28 00:27:04.780: INFO: Got endpoints: latency-svc-7l6xz [968.310664ms]
Jan 28 00:27:04.825: INFO: Created: latency-svc-tvs4f
Jan 28 00:27:04.896: INFO: Got endpoints: latency-svc-tvs4f [1.040960377s]
Jan 28 00:27:04.914: INFO: Created: latency-svc-9hjmk
Jan 28 00:27:04.925: INFO: Got endpoints: latency-svc-9hjmk [925.031553ms]
Jan 28 00:27:04.939: INFO: Created: latency-svc-xch4v
Jan 28 00:27:04.944: INFO: Got endpoints: latency-svc-xch4v [943.025214ms]
Jan 28 00:27:04.974: INFO: Created: latency-svc-rqskx
Jan 28 00:27:05.084: INFO: Got endpoints: latency-svc-rqskx [1.041102646s]
Jan 28 00:27:05.084: INFO: Created: latency-svc-lv25k
Jan 28 00:27:05.089: INFO: Got endpoints: latency-svc-lv25k [1.019056885s]
Jan 28 00:27:05.116: INFO: Created: latency-svc-zqg68
Jan 28 00:27:05.140: INFO: Got endpoints: latency-svc-zqg68 [946.456641ms]
Jan 28 00:27:05.181: INFO: Created: latency-svc-99mnc
Jan 28 00:27:05.249: INFO: Got endpoints: latency-svc-99mnc [994.615818ms]
Jan 28 00:27:05.264: INFO: Created: latency-svc-z87nl
Jan 28 00:27:05.267: INFO: Got endpoints: latency-svc-z87nl [897.945613ms]
Jan 28 00:27:05.319: INFO: Created: latency-svc-zh6rw
Jan 28 00:27:05.327: INFO: Got endpoints: latency-svc-zh6rw [928.835158ms]
Jan 28 00:27:05.425: INFO: Created: latency-svc-zjhds
Jan 28 00:27:05.461: INFO: Got endpoints: latency-svc-zjhds [1.021044343s]
Jan 28 00:27:05.466: INFO: Created: latency-svc-kzvg6
Jan 28 00:27:05.487: INFO: Got endpoints: latency-svc-kzvg6 [926.289573ms]
Jan 28 00:27:05.646: INFO: Created: latency-svc-v6bgr
Jan 28 00:27:05.661: INFO: Got endpoints: latency-svc-v6bgr [1.034996179s]
Jan 28 00:27:05.709: INFO: Created: latency-svc-xwxs8
Jan 28 00:27:05.730: INFO: Got endpoints: latency-svc-xwxs8 [1.001175445s]
Jan 28 00:27:05.802: INFO: Created: latency-svc-wgc22
Jan 28 00:27:05.810: INFO: Got endpoints: latency-svc-wgc22 [1.066699618s]
Jan 28 00:27:05.854: INFO: Created: latency-svc-ptxc5
Jan 28 00:27:05.877: INFO: Got endpoints: latency-svc-ptxc5 [1.096569587s]
Jan 28 00:27:05.881: INFO: Created: latency-svc-m8cf2
Jan 28 00:27:05.908: INFO: Got endpoints: latency-svc-m8cf2 [1.012066298s]
Jan 28 00:27:05.969: INFO: Created: latency-svc-77rgh
Jan 28 00:27:05.976: INFO: Got endpoints: latency-svc-77rgh [1.050610733s]
Jan 28 00:27:06.011: INFO: Created: latency-svc-rtbss
Jan 28 00:27:06.029: INFO: Got endpoints: latency-svc-rtbss [1.084593626s]
Jan 28 00:27:06.054: INFO: Created: latency-svc-dfkkr
Jan 28 00:27:06.064: INFO: Got endpoints: latency-svc-dfkkr [980.377854ms]
Jan 28 00:27:06.159: INFO: Created: latency-svc-5xnkt
Jan 28 00:27:06.192: INFO: Got endpoints: latency-svc-5xnkt [1.102338486s]
Jan 28 00:27:06.337: INFO: Created: latency-svc-6cbz2
Jan 28 00:27:06.368: INFO: Created: latency-svc-zcrr7
Jan 28 00:27:06.369: INFO: Got endpoints: latency-svc-6cbz2 [1.228318778s]
Jan 28 00:27:06.376: INFO: Got endpoints: latency-svc-zcrr7 [1.126987891s]
Jan 28 00:27:06.419: INFO: Created: latency-svc-g89kn
Jan 28 00:27:06.540: INFO: Got endpoints: latency-svc-g89kn [1.273413643s]
Jan 28 00:27:06.553: INFO: Created: latency-svc-mc8hn
Jan 28 00:27:06.562: INFO: Got endpoints: latency-svc-mc8hn [1.234436423s]
Jan 28 00:27:06.601: INFO: Created: latency-svc-j4kjq
Jan 28 00:27:06.621: INFO: Got endpoints: latency-svc-j4kjq [1.159982396s]
Jan 28 00:27:06.706: INFO: Created: latency-svc-2t452
Jan 28 00:27:06.717: INFO: Got endpoints: latency-svc-2t452 [1.228726307s]
Jan 28 00:27:06.743: INFO: Created: latency-svc-8lxhr
Jan 28 00:27:06.755: INFO: Got endpoints: latency-svc-8lxhr [1.093689424s]
Jan 28 00:27:06.776: INFO: Created: latency-svc-7hz7h
Jan 28 00:27:06.782: INFO: Got endpoints: latency-svc-7hz7h [1.051845583s]
Jan 28 00:27:06.874: INFO: Created: latency-svc-xhvgj
Jan 28 00:27:06.898: INFO: Got endpoints: latency-svc-xhvgj [1.088075974s]
Jan 28 00:27:06.902: INFO: Created: latency-svc-l6jfn
Jan 28 00:27:06.908: INFO: Got endpoints: latency-svc-l6jfn [1.030911845s]
Jan 28 00:27:06.929: INFO: Created: latency-svc-wwrm5
Jan 28 00:27:06.939: INFO: Got endpoints: latency-svc-wwrm5 [1.030451219s]
Jan 28 00:27:07.033: INFO: Created: latency-svc-lvbj2
Jan 28 00:27:07.038: INFO: Got endpoints: latency-svc-lvbj2 [1.061255008s]
Jan 28 00:27:07.041: INFO: Created: latency-svc-w4w7k
Jan 28 00:27:07.065: INFO: Got endpoints: latency-svc-w4w7k [1.035996585s]
Jan 28 00:27:07.116: INFO: Created: latency-svc-bxcvl
Jan 28 00:27:07.124: INFO: Got endpoints: latency-svc-bxcvl [1.060292533s]
Jan 28 00:27:07.218: INFO: Created: latency-svc-sdcrx
Jan 28 00:27:07.223: INFO: Got endpoints: latency-svc-sdcrx [1.031301947s]
Jan 28 00:27:07.282: INFO: Created: latency-svc-ggqrt
Jan 28 00:27:07.282: INFO: Got endpoints: latency-svc-ggqrt [912.783318ms]
Jan 28 00:27:07.356: INFO: Created: latency-svc-ghfmk
Jan 28 00:27:07.363: INFO: Got endpoints: latency-svc-ghfmk [986.652704ms]
Jan 28 00:27:07.405: INFO: Created: latency-svc-kpgq4
Jan 28 00:27:07.441: INFO: Got endpoints: latency-svc-kpgq4 [900.925135ms]
Jan 28 00:27:07.557: INFO: Created: latency-svc-k5jqk
Jan 28 00:27:07.605: INFO: Got endpoints: latency-svc-k5jqk [1.042591385s]
Jan 28 00:27:07.606: INFO: Created: latency-svc-5hxds
Jan 28 00:27:07.621: INFO: Got endpoints: latency-svc-5hxds [999.293534ms]
Jan 28 00:27:07.652: INFO: Created: latency-svc-gfm8c
Jan 28 00:27:07.694: INFO: Got endpoints: latency-svc-gfm8c [977.425949ms]
Jan 28 00:27:07.752: INFO: Created: latency-svc-vnx55
Jan 28 00:27:07.753: INFO: Got endpoints: latency-svc-vnx55 [998.419032ms]
Jan 28 00:27:07.780: INFO: Created: latency-svc-lm6mw
Jan 28 00:27:07.784: INFO: Got endpoints: latency-svc-lm6mw [1.00222625s]
Jan 28 00:27:07.885: INFO: Created: latency-svc-8fg6j
Jan 28 00:27:07.911: INFO: Got endpoints: latency-svc-8fg6j [1.012853157s]
Jan 28 00:27:07.917: INFO: Created: latency-svc-7dqjx
Jan 28 00:27:07.933: INFO: Got endpoints: latency-svc-7dqjx [1.024558922s]
Jan 28 00:27:07.959: INFO: Created: latency-svc-8qhzh
Jan 28 00:27:08.020: INFO: Got endpoints: latency-svc-8qhzh [1.080893324s]
Jan 28 00:27:08.039: INFO: Created: latency-svc-lsc9m
Jan 28 00:27:08.043: INFO: Got endpoints: latency-svc-lsc9m [1.005173629s]
Jan 28 00:27:08.081: INFO: Created: latency-svc-9mjtd
Jan 28 00:27:08.091: INFO: Got endpoints: latency-svc-9mjtd [1.026658597s]
Jan 28 00:27:08.181: INFO: Created: latency-svc-7jppk
Jan 28 00:27:08.212: INFO: Got endpoints: latency-svc-7jppk [1.087192566s]
Jan 28 00:27:08.212: INFO: Created: latency-svc-wdb9t
Jan 28 00:27:08.241: INFO: Got endpoints: latency-svc-wdb9t [1.017558771s]
Jan 28 00:27:08.243: INFO: Created: latency-svc-cgw8r
Jan 28 00:27:08.254: INFO: Got endpoints: latency-svc-cgw8r [971.879839ms]
Jan 28 00:27:08.361: INFO: Created: latency-svc-c4k72
Jan 28 00:27:08.364: INFO: Got endpoints: latency-svc-c4k72 [1.000377346s]
Jan 28 00:27:08.397: INFO: Created: latency-svc-9tjb5
Jan 28 00:27:08.410: INFO: Got endpoints: latency-svc-9tjb5 [968.506201ms]
Jan 28 00:27:08.437: INFO: Created: latency-svc-99mw8
Jan 28 00:27:08.446: INFO: Got endpoints: latency-svc-99mw8 [840.818438ms]
Jan 28 00:27:08.556: INFO: Created: latency-svc-9sd9c
Jan 28 00:27:08.608: INFO: Got endpoints: latency-svc-9sd9c [987.543503ms]
Jan 28 00:27:08.614: INFO: Created: latency-svc-cbmkl
Jan 28 00:27:08.723: INFO: Got endpoints: latency-svc-cbmkl [1.028753043s]
Jan 28 00:27:08.753: INFO: Created: latency-svc-98rrq
Jan 28 00:27:08.760: INFO: Got endpoints: latency-svc-98rrq [1.006275171s]
Jan 28 00:27:08.818: INFO: Created: latency-svc-59mdj
Jan 28 00:27:08.823: INFO: Got endpoints: latency-svc-59mdj [1.039134696s]
Jan 28 00:27:08.885: INFO: Created: latency-svc-lps66
Jan 28 00:27:08.903: INFO: Created: latency-svc-bxnvh
Jan 28 00:27:08.904: INFO: Got endpoints: latency-svc-lps66 [992.644579ms]
Jan 28 00:27:08.939: INFO: Got endpoints: latency-svc-bxnvh [1.005963765s]
Jan 28 00:27:08.945: INFO: Created: latency-svc-wlwpf
Jan 28 00:27:09.058: INFO: Got endpoints: latency-svc-wlwpf [1.038048298s]
Jan 28 00:27:09.062: INFO: Created: latency-svc-h6vqj
Jan 28 00:27:09.071: INFO: Got endpoints: latency-svc-h6vqj [1.028607706s]
Jan 28 00:27:09.108: INFO: Created: latency-svc-vl8jk
Jan 28 00:27:09.117: INFO: Got endpoints: latency-svc-vl8jk [1.02592327s]
Jan 28 00:27:09.239: INFO: Created: latency-svc-vn8pc
Jan 28 00:27:09.248: INFO: Got endpoints: latency-svc-vn8pc [1.036120356s]
Jan 28 00:27:09.281: INFO: Created: latency-svc-f9kkj
Jan 28 00:27:09.292: INFO: Got endpoints: latency-svc-f9kkj [1.050619229s]
Jan 28 00:27:09.317: INFO: Created: latency-svc-2qr79
Jan 28 00:27:09.320: INFO: Got endpoints: latency-svc-2qr79 [1.065866875s]
Jan 28 00:27:09.424: INFO: Created: latency-svc-9tx9p
Jan 28 00:27:09.428: INFO: Got endpoints: latency-svc-9tx9p [1.064265101s]
Jan 28 00:27:09.496: INFO: Created: latency-svc-bv952
Jan 28 00:27:09.519: INFO: Got endpoints: latency-svc-bv952 [1.109219143s]
Jan 28 00:27:09.531: INFO: Created: latency-svc-4484x
Jan 28 00:27:09.586: INFO: Got endpoints: latency-svc-4484x [1.139899807s]
Jan 28 00:27:09.590: INFO: Created: latency-svc-pgqcm
Jan 28 00:27:09.597: INFO: Got endpoints: latency-svc-pgqcm [988.834513ms]
Jan 28 00:27:09.620: INFO: Created: latency-svc-zt2vt
Jan 28 00:27:09.623: INFO: Got endpoints: latency-svc-zt2vt [899.707226ms]
Jan 28 00:27:09.641: INFO: Created: latency-svc-gjksg
Jan 28 00:27:09.741: INFO: Got endpoints: latency-svc-gjksg [980.868006ms]
Jan 28 00:27:09.756: INFO: Created: latency-svc-k8n9b
Jan 28 00:27:09.777: INFO: Got endpoints: latency-svc-k8n9b [953.867241ms]
Jan 28 00:27:09.804: INFO: Created: latency-svc-jnfsd
Jan 28 00:27:09.808: INFO: Got endpoints: latency-svc-jnfsd [903.181014ms]
Jan 28 00:27:09.849: INFO: Created: latency-svc-m88ff
Jan 28 00:27:09.905: INFO: Got endpoints: latency-svc-m88ff [966.283238ms]
Jan 28 00:27:09.949: INFO: Created: latency-svc-klv9k
Jan 28 00:27:09.964: INFO: Got endpoints: latency-svc-klv9k [905.557205ms]
Jan 28 00:27:09.979: INFO: Created: latency-svc-74mh5
Jan 28 00:27:09.980: INFO: Got endpoints: latency-svc-74mh5 [908.025044ms]
Jan 28 00:27:10.044: INFO: Created: latency-svc-h79gl
Jan 28 00:27:10.044: INFO: Got endpoints: latency-svc-h79gl [926.175523ms]
Jan 28 00:27:10.083: INFO: Created: latency-svc-2rtvs
Jan 28 00:27:10.103: INFO: Got endpoints: latency-svc-2rtvs [854.9923ms]
Jan 28 00:27:10.184: INFO: Created: latency-svc-zkgrj
Jan 28 00:27:10.210: INFO: Got endpoints: latency-svc-zkgrj [918.499301ms]
Jan 28 00:27:10.215: INFO: Created: latency-svc-4ddd2
Jan 28 00:27:10.218: INFO: Got endpoints: latency-svc-4ddd2 [898.640118ms]
Jan 28 00:27:10.254: INFO: Created: latency-svc-225mm
Jan 28 00:27:10.258: INFO: Got endpoints: latency-svc-225mm [830.096465ms]
Jan 28 00:27:10.352: INFO: Created: latency-svc-cdctb
Jan 28 00:27:10.410: INFO: Created: latency-svc-lrmts
Jan 28 00:27:10.410: INFO: Got endpoints: latency-svc-cdctb [890.217259ms]
Jan 28 00:27:10.413: INFO: Got endpoints: latency-svc-lrmts [827.450515ms]
Jan 28 00:27:10.455: INFO: Created: latency-svc-hkhkt
Jan 28 00:27:10.509: INFO: Got endpoints: latency-svc-hkhkt [911.770796ms]
Jan 28 00:27:10.535: INFO: Created: latency-svc-v5kqq
Jan 28 00:27:10.546: INFO: Got endpoints: latency-svc-v5kqq [922.860599ms]
Jan 28 00:27:10.568: INFO: Created: latency-svc-bc5wp
Jan 28 00:27:10.571: INFO: Got endpoints: latency-svc-bc5wp [830.191182ms]
Jan 28 00:27:10.596: INFO: Created: latency-svc-d7w2d
Jan 28 00:27:10.694: INFO: Created: latency-svc-5qr7m
Jan 28 00:27:10.697: INFO: Got endpoints: latency-svc-d7w2d [919.888104ms]
Jan 28 00:27:10.707: INFO: Got endpoints: latency-svc-5qr7m [899.588811ms]
Jan 28 00:27:10.725: INFO: Created: latency-svc-jb2qq
Jan 28 00:27:10.735: INFO: Got endpoints: latency-svc-jb2qq [829.200061ms]
Jan 28 00:27:10.778: INFO: Created: latency-svc-lvc97
Jan 28 00:27:10.791: INFO: Got endpoints: latency-svc-lvc97 [827.112328ms]
Jan 28 00:27:10.885: INFO: Created: latency-svc-2j89k
Jan 28 00:27:10.885: INFO: Got endpoints: latency-svc-2j89k [905.854147ms]
Jan 28 00:27:10.910: INFO: Created: latency-svc-k5x89
Jan 28 00:27:10.937: INFO: Created: latency-svc-8sdhh
Jan 28 00:27:10.937: INFO: Got endpoints: latency-svc-k5x89 [893.662289ms]
Jan 28 00:27:10.952: INFO: Got endpoints: latency-svc-8sdhh [848.460136ms]
Jan 28 00:27:11.026: INFO: Created: latency-svc-244wz
Jan 28 00:27:11.027: INFO: Got endpoints: latency-svc-244wz [816.362177ms]
Jan 28 00:27:11.066: INFO: Created: latency-svc-mcbch
Jan 28 00:27:11.076: INFO: Got endpoints: latency-svc-mcbch [857.16887ms]
Jan 28 00:27:11.101: INFO: Created: latency-svc-2nt2l
Jan 28 00:27:11.114: INFO: Got endpoints: latency-svc-2nt2l [855.471623ms]
Jan 28 00:27:11.192: INFO: Created: latency-svc-dblk2
Jan 28 00:27:11.193: INFO: Got endpoints: latency-svc-dblk2 [783.493314ms]
Jan 28 00:27:11.223: INFO: Created: latency-svc-hlzmq
Jan 28 00:27:11.227: INFO: Got endpoints: latency-svc-hlzmq [814.025242ms]
Jan 28 00:27:11.265: INFO: Created: latency-svc-7fg8h
Jan 28 00:27:11.290: INFO: Created: latency-svc-dkrtn
Jan 28 00:27:11.346: INFO: Got endpoints: latency-svc-7fg8h [836.619039ms]
Jan 28 00:27:11.347: INFO: Got endpoints: latency-svc-dkrtn [800.575678ms]
Jan 28 00:27:11.353: INFO: Created: latency-svc-pgjcf
Jan 28 00:27:11.365: INFO: Got endpoints: latency-svc-pgjcf [793.096745ms]
Jan 28 00:27:11.398: INFO: Created: latency-svc-h8sgm
Jan 28 00:27:11.422: INFO: Got endpoints: latency-svc-h8sgm [724.419229ms]
Jan 28 00:27:11.505: INFO: Created: latency-svc-bhnkp
Jan 28 00:27:11.552: INFO: Got endpoints: latency-svc-bhnkp [844.671686ms]
Jan 28 00:27:11.570: INFO: Created: latency-svc-tstg8
Jan 28 00:27:11.696: INFO: Got endpoints: latency-svc-tstg8 [961.118993ms]
Jan 28 00:27:11.699: INFO: Created: latency-svc-2qrv6
Jan 28 00:27:11.736: INFO: Got endpoints: latency-svc-2qrv6 [944.633672ms]
Jan 28 00:27:11.773: INFO: Created: latency-svc-w782j
Jan 28 00:27:11.788: INFO: Got endpoints: latency-svc-w782j [902.758169ms]
Jan 28 00:27:11.881: INFO: Created: latency-svc-njshg
Jan 28 00:27:11.894: INFO: Got endpoints: latency-svc-njshg [956.953677ms]
Jan 28 00:27:11.952: INFO: Created: latency-svc-7xrzb
Jan 28 00:27:12.038: INFO: Got endpoints: latency-svc-7xrzb [1.086310777s]
Jan 28 00:27:12.194: INFO: Created: latency-svc-pbbx2
Jan 28 00:27:12.245: INFO: Created: latency-svc-fsbzh
Jan 28 00:27:12.245: INFO: Got endpoints: latency-svc-pbbx2 [1.2182658s]
Jan 28 00:27:12.278: INFO: Got endpoints: latency-svc-fsbzh [1.20190427s]
Jan 28 00:27:12.377: INFO: Created: latency-svc-czhjs
Jan 28 00:27:12.391: INFO: Got endpoints: latency-svc-czhjs [1.277113587s]
Jan 28 00:27:12.446: INFO: Created: latency-svc-pcxt8
Jan 28 00:27:12.469: INFO: Got endpoints: latency-svc-pcxt8 [1.275431515s]
Jan 28 00:27:12.595: INFO: Created: latency-svc-g9krs
Jan 28 00:27:12.595: INFO: Got endpoints: latency-svc-g9krs [1.36741213s]
Jan 28 00:27:12.636: INFO: Created: latency-svc-76qsk
Jan 28 00:27:12.640: INFO: Got endpoints: latency-svc-76qsk [1.294491835s]
Jan 28 00:27:12.735: INFO: Created: latency-svc-dw6kj
Jan 28 00:27:12.738: INFO: Got endpoints: latency-svc-dw6kj [1.390845882s]
Jan 28 00:27:12.773: INFO: Created: latency-svc-9ttv7
Jan 28 00:27:12.788: INFO: Got endpoints: latency-svc-9ttv7 [1.423171265s]
Jan 28 00:27:12.895: INFO: Created: latency-svc-6chr7
Jan 28 00:27:12.921: INFO: Got endpoints: latency-svc-6chr7 [1.498940449s]
Jan 28 00:27:12.927: INFO: Created: latency-svc-frnkd
Jan 28 00:27:12.931: INFO: Got endpoints: latency-svc-frnkd [1.378807953s]
Jan 28 00:27:12.984: INFO: Created: latency-svc-876sz
Jan 28 00:27:13.055: INFO: Got endpoints: latency-svc-876sz [1.359414451s]
Jan 28 00:27:13.090: INFO: Created: latency-svc-npfhl
Jan 28 00:27:13.093: INFO: Got endpoints: latency-svc-npfhl [1.356963981s]
Jan 28 00:27:13.108: INFO: Created: latency-svc-28dc5
Jan 28 00:27:13.118: INFO: Got endpoints: latency-svc-28dc5 [1.3293598s]
Jan 28 00:27:13.150: INFO: Created: latency-svc-gk7cd
Jan 28 00:27:13.203: INFO: Got endpoints: latency-svc-gk7cd [1.308519861s]
Jan 28 00:27:13.234: INFO: Created: latency-svc-49h4p
Jan 28 00:27:13.242: INFO: Got endpoints: latency-svc-49h4p [1.203930817s]
Jan 28 00:27:13.285: INFO: Created: latency-svc-qbk9z
Jan 28 00:27:13.295: INFO: Got endpoints: latency-svc-qbk9z [1.049989849s]
Jan 28 00:27:13.376: INFO: Created: latency-svc-xwsz9
Jan 28 00:27:13.383: INFO: Got endpoints: latency-svc-xwsz9 [1.104782984s]
Jan 28 00:27:13.403: INFO: Created: latency-svc-c7szh
Jan 28 00:27:13.420: INFO: Got endpoints: latency-svc-c7szh [1.028091202s]
Jan 28 00:27:13.420: INFO: Created: latency-svc-sdgqv
Jan 28 00:27:13.428: INFO: Got endpoints: latency-svc-sdgqv [958.990935ms]
Jan 28 00:27:13.461: INFO: Created: latency-svc-2twzz
Jan 28 00:27:13.499: INFO: Got endpoints: latency-svc-2twzz [904.407822ms]
Jan 28 00:27:13.542: INFO: Created: latency-svc-5gzz7
Jan 28 00:27:13.583: INFO: Got endpoints: latency-svc-5gzz7 [942.055932ms]
Jan 28 00:27:13.583: INFO: Created: latency-svc-7qcd8
Jan 28 00:27:13.594: INFO: Got endpoints: latency-svc-7qcd8 [855.450627ms]
Jan 28 00:27:13.678: INFO: Created: latency-svc-g5pgb
Jan 28 00:27:13.713: INFO: Got endpoints: latency-svc-g5pgb [925.348947ms]
Jan 28 00:27:13.755: INFO: Created: latency-svc-gttb2
Jan 28 00:27:13.764: INFO: Got endpoints: latency-svc-gttb2 [843.169451ms]
Jan 28 00:27:13.846: INFO: Created: latency-svc-9h2bq
Jan 28 00:27:13.883: INFO: Created: latency-svc-zgjnw
Jan 28 00:27:13.885: INFO: Got endpoints: latency-svc-9h2bq [953.655497ms]
Jan 28 00:27:13.931: INFO: Got endpoints: latency-svc-zgjnw [875.565784ms]
Jan 28 00:27:13.945: INFO: Created: latency-svc-k2vdv
Jan 28 00:27:14.007: INFO: Got endpoints: latency-svc-k2vdv [914.072547ms]
Jan 28 00:27:14.088: INFO: Created: latency-svc-k72ng
Jan 28 00:27:14.240: INFO: Got endpoints: latency-svc-k72ng [1.121341267s]
Jan 28 00:27:14.256: INFO: Created: latency-svc-ttb6p
Jan 28 00:27:14.500: INFO: Got endpoints: latency-svc-ttb6p [1.296975984s]
Jan 28 00:27:14.542: INFO: Created: latency-svc-wjmqq
Jan 28 00:27:14.555: INFO: Got endpoints: latency-svc-wjmqq [1.312931003s]
Jan 28 00:27:14.738: INFO: Created: latency-svc-l8cj7
Jan 28 00:27:14.766: INFO: Got endpoints: latency-svc-l8cj7 [1.470605936s]
Jan 28 00:27:14.827: INFO: Created: latency-svc-k7xlh
Jan 28 00:27:14.953: INFO: Got endpoints: latency-svc-k7xlh [1.570308478s]
Jan 28 00:27:14.965: INFO: Created: latency-svc-jg6zd
Jan 28 00:27:14.968: INFO: Got endpoints: latency-svc-jg6zd [1.548118194s]
Jan 28 00:27:15.120: INFO: Created: latency-svc-xw9mh
Jan 28 00:27:15.166: INFO: Created: latency-svc-nvlqk
Jan 28 00:27:15.167: INFO: Got endpoints: latency-svc-xw9mh [1.738646134s]
Jan 28 00:27:15.167: INFO: Got endpoints: latency-svc-nvlqk [1.668054174s]
Jan 28 00:27:15.213: INFO: Created: latency-svc-pq4qf
Jan 28 00:27:15.285: INFO: Got endpoints: latency-svc-pq4qf [1.701836179s]
Jan 28 00:27:15.285: INFO: Latencies: [124.411026ms 131.543266ms 132.613212ms 222.156403ms 266.134853ms 358.454806ms 423.1538ms 535.573251ms 572.537603ms 615.89409ms 724.419229ms 740.173727ms 781.400244ms 783.493314ms 793.096745ms 800.575678ms 814.025242ms 816.362177ms 827.112328ms 827.450515ms 829.200061ms 830.096465ms 830.191182ms 836.619039ms 840.818438ms 843.169451ms 844.671686ms 846.900266ms 848.460136ms 854.9923ms 855.450627ms 855.471623ms 857.16887ms 875.565784ms 878.35366ms 890.217259ms 893.662289ms 894.984728ms 897.945613ms 898.640118ms 899.588811ms 899.707226ms 900.925135ms 902.758169ms 903.181014ms 904.407822ms 905.557205ms 905.854147ms 907.955899ms 908.025044ms 911.770796ms 912.783318ms 914.072547ms 918.337275ms 918.499301ms 919.054053ms 919.888104ms 922.860599ms 925.031553ms 925.348947ms 926.175523ms 926.289573ms 928.835158ms 940.71045ms 942.055932ms 943.025214ms 944.633672ms 946.456641ms 953.655497ms 953.867241ms 956.096801ms 956.953677ms 958.990935ms 961.118993ms 966.283238ms 966.317607ms 968.310664ms 968.506201ms 968.604796ms 971.879839ms 977.425949ms 980.377854ms 980.868006ms 983.151165ms 984.349659ms 986.60009ms 986.652704ms 987.543503ms 988.834513ms 992.644579ms 994.615818ms 997.749514ms 997.956113ms 998.419032ms 999.293534ms 1.000377346s 1.001175445s 1.002214381s 1.00222625s 1.005173629s 1.005963765s 1.006275171s 1.012066298s 1.012853157s 1.017558771s 1.019056885s 1.019910171s 1.021044343s 1.02322583s 1.024558922s 1.02592327s 1.026658597s 1.026770065s 1.028091202s 1.028607706s 1.028753043s 1.030451219s 1.030911845s 1.031301947s 1.032939542s 1.033074836s 1.034996179s 1.035996585s 1.036120356s 1.038048298s 1.039134696s 1.040960377s 1.041102646s 1.042591385s 1.045061869s 1.049989849s 1.050610733s 1.050619229s 1.051845583s 1.060292533s 1.061255008s 1.062491109s 1.064265101s 1.065866875s 1.066699618s 1.067482051s 1.076219407s 1.080893324s 1.084593626s 1.086310777s 1.087192566s 1.088075974s 1.093689424s 1.096569587s 1.102338486s 1.104782984s 1.109219143s 1.121341267s 1.126987891s 1.139899807s 1.146392497s 1.159982396s 1.175718983s 1.20190427s 1.203930817s 1.2182658s 1.228318778s 1.228726307s 1.234436423s 1.273413643s 1.275431515s 1.277113587s 1.283211753s 1.294491835s 1.296975984s 1.308519861s 1.312931003s 1.3293598s 1.356963981s 1.357660597s 1.359414451s 1.36741213s 1.378807953s 1.390845882s 1.393367164s 1.414469171s 1.423171265s 1.457587253s 1.470119774s 1.470605936s 1.477594028s 1.478090705s 1.498820239s 1.498940449s 1.504656461s 1.521687583s 1.522070169s 1.542122441s 1.548118194s 1.557924083s 1.570308478s 1.579196007s 1.668054174s 1.701836179s 1.738646134s]
Jan 28 00:27:15.286: INFO: 50 %ile: 1.005963765s
Jan 28 00:27:15.286: INFO: 90 %ile: 1.414469171s
Jan 28 00:27:15.286: INFO: 99 %ile: 1.701836179s
Jan 28 00:27:15.286: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:27:15.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-1944" for this suite.

• [SLOW TEST:35.297 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":280,"completed":114,"skipped":2040,"failed":0}
SSSSSSSSSSSSSSS
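
Note on the percentile lines above: the suite sorts the 200 endpoint-creation latencies and picks elements directly out of the sorted slice, so the reported values are order statistics rather than interpolated percentiles. A minimal Go sketch of that selection (a sketch, not the e2e framework's exact code; indexing sorted[len*p/100] reproduces the three values reported above):

package main

import (
    "fmt"
    "sort"
    "time"
)

// percentile returns the sample at index len*p/100 of the sorted slice.
// With the 200 samples above, p=50 picks sorted[100] (1.005963765s),
// p=90 picks sorted[180] (1.414469171s), p=99 picks sorted[198] (1.701836179s).
func percentile(latencies []time.Duration, p int) time.Duration {
    sorted := append([]time.Duration(nil), latencies...)
    sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
    if len(sorted) == 0 {
        return 0
    }
    idx := len(sorted) * p / 100
    if idx >= len(sorted) {
        idx = len(sorted) - 1
    }
    return sorted[idx]
}

func main() {
    samples := []time.Duration{ /* the 200 latencies printed above */ }
    for _, p := range []int{50, 90, 99} {
        fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
    }
}
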
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:27:15.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:27:32.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4921" for this suite.

• [SLOW TEST:17.635 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":280,"completed":115,"skipped":2055,"failed":0}
SSSSSSSS
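
For context on the scope checks above: a quota scoped to BestEffort counts only pods that set no resource requests or limits, while a NotBestEffort scope counts the rest, so each test pod is captured by exactly one of the two quotas. A hedged client-go sketch of the two quota objects (object names and the hard pod count are illustrative, not taken from this run):

package main

import (
    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newQuota builds a ResourceQuota that caps the pod count, but only for
// pods matching the given scope (BestEffort vs. NotBestEffort).
func newQuota(name string, scope v1.ResourceQuotaScope) *v1.ResourceQuota {
    return &v1.ResourceQuota{
        ObjectMeta: metav1.ObjectMeta{Name: name},
        Spec: v1.ResourceQuotaSpec{
            Hard:   v1.ResourceList{v1.ResourcePods: resource.MustParse("1")},
            Scopes: []v1.ResourceQuotaScope{scope},
        },
    }
}

func main() {
    // One quota per scope; a best-effort pod consumes only the first,
    // a pod with requests/limits consumes only the second.
    _ = newQuota("quota-besteffort", v1.ResourceQuotaScopeBestEffort)
    _ = newQuota("quota-not-besteffort", v1.ResourceQuotaScopeNotBestEffort)
}
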
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:27:32.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8569
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-8569
I0128 00:27:33.531277       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8569, replica count: 2
I0128 00:27:36.582259       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:27:39.582837       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:27:42.583412       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0128 00:27:45.583841       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 28 00:27:45.583: INFO: Creating new exec pod
Jan 28 00:27:56.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8569 execpoddsh55 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 28 00:27:57.124: INFO: stderr: "I0128 00:27:56.945789    2881 log.go:172] (0xc000bc8dc0) (0xc000a543c0) Create stream\nI0128 00:27:56.946188    2881 log.go:172] (0xc000bc8dc0) (0xc000a543c0) Stream added, broadcasting: 1\nI0128 00:27:56.952408    2881 log.go:172] (0xc000bc8dc0) Reply frame received for 1\nI0128 00:27:56.952570    2881 log.go:172] (0xc000bc8dc0) (0xc0009cc280) Create stream\nI0128 00:27:56.952593    2881 log.go:172] (0xc000bc8dc0) (0xc0009cc280) Stream added, broadcasting: 3\nI0128 00:27:56.956617    2881 log.go:172] (0xc000bc8dc0) Reply frame received for 3\nI0128 00:27:56.956855    2881 log.go:172] (0xc000bc8dc0) (0xc000a00320) Create stream\nI0128 00:27:56.956920    2881 log.go:172] (0xc000bc8dc0) (0xc000a00320) Stream added, broadcasting: 5\nI0128 00:27:56.961889    2881 log.go:172] (0xc000bc8dc0) Reply frame received for 5\nI0128 00:27:57.033987    2881 log.go:172] (0xc000bc8dc0) Data frame received for 5\nI0128 00:27:57.034088    2881 log.go:172] (0xc000a00320) (5) Data frame handling\nI0128 00:27:57.034122    2881 log.go:172] (0xc000a00320) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0128 00:27:57.040902    2881 log.go:172] (0xc000bc8dc0) Data frame received for 5\nI0128 00:27:57.041015    2881 log.go:172] (0xc000a00320) (5) Data frame handling\nI0128 00:27:57.041045    2881 log.go:172] (0xc000a00320) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0128 00:27:57.109678    2881 log.go:172] (0xc000bc8dc0) (0xc0009cc280) Stream removed, broadcasting: 3\nI0128 00:27:57.110183    2881 log.go:172] (0xc000bc8dc0) Data frame received for 1\nI0128 00:27:57.110215    2881 log.go:172] (0xc000a543c0) (1) Data frame handling\nI0128 00:27:57.110411    2881 log.go:172] (0xc000a543c0) (1) Data frame sent\nI0128 00:27:57.110429    2881 log.go:172] (0xc000bc8dc0) (0xc000a543c0) Stream removed, broadcasting: 1\nI0128 00:27:57.111598    2881 log.go:172] (0xc000bc8dc0) (0xc000a00320) Stream removed, broadcasting: 5\nI0128 00:27:57.112279    2881 log.go:172] (0xc000bc8dc0) Go away received\nI0128 00:27:57.112947    2881 log.go:172] (0xc000bc8dc0) (0xc000a543c0) Stream removed, broadcasting: 1\nI0128 00:27:57.112996    2881 log.go:172] (0xc000bc8dc0) (0xc0009cc280) Stream removed, broadcasting: 3\nI0128 00:27:57.113014    2881 log.go:172] (0xc000bc8dc0) (0xc000a00320) Stream removed, broadcasting: 5\n"
Jan 28 00:27:57.124: INFO: stdout: ""
Jan 28 00:27:57.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8569 execpoddsh55 -- /bin/sh -x -c nc -zv -t -w 2 10.96.86.48 80'
Jan 28 00:27:57.570: INFO: stderr: "I0128 00:27:57.337798    2900 log.go:172] (0xc000b36b00) (0xc000ab8280) Create stream\nI0128 00:27:57.338263    2900 log.go:172] (0xc000b36b00) (0xc000ab8280) Stream added, broadcasting: 1\nI0128 00:27:57.343230    2900 log.go:172] (0xc000b36b00) Reply frame received for 1\nI0128 00:27:57.343323    2900 log.go:172] (0xc000b36b00) (0xc000a6e000) Create stream\nI0128 00:27:57.343356    2900 log.go:172] (0xc000b36b00) (0xc000a6e000) Stream added, broadcasting: 3\nI0128 00:27:57.345347    2900 log.go:172] (0xc000b36b00) Reply frame received for 3\nI0128 00:27:57.345378    2900 log.go:172] (0xc000b36b00) (0xc000ab8320) Create stream\nI0128 00:27:57.345385    2900 log.go:172] (0xc000b36b00) (0xc000ab8320) Stream added, broadcasting: 5\nI0128 00:27:57.346945    2900 log.go:172] (0xc000b36b00) Reply frame received for 5\nI0128 00:27:57.425187    2900 log.go:172] (0xc000b36b00) Data frame received for 5\nI0128 00:27:57.425426    2900 log.go:172] (0xc000ab8320) (5) Data frame handling\nI0128 00:27:57.425502    2900 log.go:172] (0xc000ab8320) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.86.48 80\nI0128 00:27:57.433298    2900 log.go:172] (0xc000b36b00) Data frame received for 5\nI0128 00:27:57.433476    2900 log.go:172] (0xc000ab8320) (5) Data frame handling\nI0128 00:27:57.433506    2900 log.go:172] (0xc000ab8320) (5) Data frame sent\nConnection to 10.96.86.48 80 port [tcp/http] succeeded!\nI0128 00:27:57.555574    2900 log.go:172] (0xc000b36b00) (0xc000ab8320) Stream removed, broadcasting: 5\nI0128 00:27:57.555945    2900 log.go:172] (0xc000b36b00) Data frame received for 1\nI0128 00:27:57.555975    2900 log.go:172] (0xc000ab8280) (1) Data frame handling\nI0128 00:27:57.556014    2900 log.go:172] (0xc000ab8280) (1) Data frame sent\nI0128 00:27:57.556108    2900 log.go:172] (0xc000b36b00) (0xc000ab8280) Stream removed, broadcasting: 1\nI0128 00:27:57.557464    2900 log.go:172] (0xc000b36b00) (0xc000a6e000) Stream removed, broadcasting: 3\nI0128 00:27:57.557724    2900 log.go:172] (0xc000b36b00) Go away received\nI0128 00:27:57.557871    2900 log.go:172] (0xc000b36b00) (0xc000ab8280) Stream removed, broadcasting: 1\nI0128 00:27:57.557934    2900 log.go:172] (0xc000b36b00) (0xc000a6e000) Stream removed, broadcasting: 3\nI0128 00:27:57.557948    2900 log.go:172] (0xc000b36b00) (0xc000ab8320) Stream removed, broadcasting: 5\n"
Jan 28 00:27:57.570: INFO: stdout: ""
Jan 28 00:27:57.570: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:27:57.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8569" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:24.801 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":280,"completed":116,"skipped":2063,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
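
The type flip exercised above is a single update to the Service spec: spec.externalName is cleared, spec.type becomes ClusterIP and a port is exposed, after which the nc probes succeed against both the service name and the allocated ClusterIP (10.96.86.48 in this run). A minimal client-go sketch of that update, assuming a recent client-go with context-taking methods (namespace and service name are taken from the log):

package main

import (
    "context"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)

    svcs := cs.CoreV1().Services("services-8569")
    svc, err := svcs.Get(context.TODO(), "externalname-service", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }

    // Clearing externalName and setting type/ports converts the service;
    // the apiserver then allocates a ClusterIP for kube-proxy to program.
    svc.Spec.Type = v1.ServiceTypeClusterIP
    svc.Spec.ExternalName = ""
    svc.Spec.Ports = []v1.ServicePort{{Port: 80}}
    if _, err := svcs.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
}
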
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:27:57.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:27:58.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1542'
Jan 28 00:27:58.581: INFO: stderr: ""
Jan 28 00:27:58.581: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jan 28 00:27:58.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1542'
Jan 28 00:27:58.986: INFO: stderr: ""
Jan 28 00:27:58.987: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 28 00:27:59.993: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 00:27:59.993: INFO: Found 0 / 1
Jan 28 00:28:00.995: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 00:28:00.996: INFO: Found 0 / 1
Jan 28 00:28:02.031: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 00:28:02.031: INFO: Found 0 / 1
Jan 28 00:28:03.026: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 00:28:03.026: INFO: Found 0 / 1
Jan 28 00:28:04.000: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 00:28:04.000: INFO: Found 0 / 1
Jan 28 00:28:04.994: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 00:28:04.994: INFO: Found 0 / 1
Jan 28 00:28:06.601: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 00:28:06.601: INFO: Found 0 / 1
Jan 28 00:28:06.993: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 00:28:06.993: INFO: Found 0 / 1
Jan 28 00:28:07.994: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 00:28:07.994: INFO: Found 0 / 1
Jan 28 00:28:09.319: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 00:28:09.320: INFO: Found 0 / 1
Jan 28 00:28:09.998: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 00:28:09.998: INFO: Found 0 / 1
Jan 28 00:28:10.994: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 00:28:10.994: INFO: Found 1 / 1
Jan 28 00:28:10.994: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 28 00:28:10.999: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 00:28:10.999: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 28 00:28:10.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-sk8d6 --namespace=kubectl-1542'
Jan 28 00:28:11.214: INFO: stderr: ""
Jan 28 00:28:11.214: INFO: stdout: "Name:         agnhost-master-sk8d6\nNamespace:    kubectl-1542\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Tue, 28 Jan 2020 00:27:58 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.44.0.3\nIPs:\n  IP:           10.44.0.3\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://af98e68a672726462feb2c2b986a48b7e0766079dc1a507540da972014d606ab\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 28 Jan 2020 00:28:08 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8bgdj (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-8bgdj:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-8bgdj\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled  <unknown>  default-scheduler    Successfully assigned kubectl-1542/agnhost-master-sk8d6 to jerma-node\n  Normal  Pulled     10s        kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    5s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    3s         kubelet, jerma-node  Started container agnhost-master\n"
Jan 28 00:28:11.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1542'
Jan 28 00:28:11.389: INFO: stderr: ""
Jan 28 00:28:11.389: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-1542\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  13s   replication-controller  Created pod: agnhost-master-sk8d6\n"
Jan 28 00:28:11.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1542'
Jan 28 00:28:11.551: INFO: stderr: ""
Jan 28 00:28:11.552: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-1542\nLabels:            app=agnhost\n                   role=master\nAnnotations:       <none>\nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.180.180\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.3:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jan 28 00:28:11.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Jan 28 00:28:11.714: INFO: stderr: ""
Jan 28 00:28:11.714: INFO: stdout: "Name:               jerma-node\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             <none>\nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     <unset>\n  RenewTime:       Tue, 28 Jan 2020 00:28:04 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Tue, 28 Jan 2020 00:25:15 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Tue, 28 Jan 2020 00:25:15 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Tue, 28 Jan 2020 00:25:15 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Tue, 28 Jan 2020 00:25:15 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (3 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23d\n  kube-system                 weave-net-kz8lv         20m (0%)      0 (0%)      0 (0%)           0 (0%)         23d\n  kubectl-1542                agnhost-master-sk8d6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              <none>\n"
Jan 28 00:28:11.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1542'
Jan 28 00:28:11.834: INFO: stderr: ""
Jan 28 00:28:11.834: INFO: stdout: "Name:         kubectl-1542\nLabels:       e2e-framework=kubectl\n              e2e-run=3bea6878-9807-4fe5-b87e-742f73226a44\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:28:11.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1542" for this suite.

• [SLOW TEST:14.062 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1156
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":280,"completed":117,"skipped":2084,"failed":0}
SSS
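
Each "Running '/usr/local/bin/kubectl ...'" line above is the framework shelling out to the kubectl binary and capturing its stdout and stderr. A minimal sketch of that pattern (the runKubectl helper is hypothetical; the binary path, kubeconfig and resource names are taken from the log):

package main

import (
    "fmt"
    "os/exec"
)

// runKubectl invokes the kubectl binary with an explicit kubeconfig and
// returns captured stdout, roughly what the log's "Running ..." lines do.
func runKubectl(args ...string) (string, error) {
    base := []string{"--kubeconfig=/root/.kube/config"}
    out, err := exec.Command("/usr/local/bin/kubectl", append(base, args...)...).Output()
    return string(out), err
}

func main() {
    for _, target := range [][]string{
        {"describe", "pod", "agnhost-master-sk8d6", "--namespace=kubectl-1542"},
        {"describe", "rc", "agnhost-master", "--namespace=kubectl-1542"},
        {"describe", "service", "agnhost-master", "--namespace=kubectl-1542"},
        {"describe", "node", "jerma-node"},
        {"describe", "namespace", "kubectl-1542"},
    } {
        out, err := runKubectl(target...)
        if err != nil {
            panic(err)
        }
        fmt.Println(out)
    }
}
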
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:28:11.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 28 00:28:11.913: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f70cbe1c-4685-49f1-9fcf-e5e775a47042" in namespace "downward-api-8219" to be "success or failure"
Jan 28 00:28:11.969: INFO: Pod "downwardapi-volume-f70cbe1c-4685-49f1-9fcf-e5e775a47042": Phase="Pending", Reason="", readiness=false. Elapsed: 55.784997ms
Jan 28 00:28:13.975: INFO: Pod "downwardapi-volume-f70cbe1c-4685-49f1-9fcf-e5e775a47042": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062238723s
Jan 28 00:28:15.979: INFO: Pod "downwardapi-volume-f70cbe1c-4685-49f1-9fcf-e5e775a47042": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066545737s
Jan 28 00:28:17.985: INFO: Pod "downwardapi-volume-f70cbe1c-4685-49f1-9fcf-e5e775a47042": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071787621s
Jan 28 00:28:19.989: INFO: Pod "downwardapi-volume-f70cbe1c-4685-49f1-9fcf-e5e775a47042": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076217475s
Jan 28 00:28:22.001: INFO: Pod "downwardapi-volume-f70cbe1c-4685-49f1-9fcf-e5e775a47042": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.087667318s
STEP: Saw pod success
Jan 28 00:28:22.001: INFO: Pod "downwardapi-volume-f70cbe1c-4685-49f1-9fcf-e5e775a47042" satisfied condition "success or failure"
Jan 28 00:28:22.013: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-f70cbe1c-4685-49f1-9fcf-e5e775a47042 container client-container: <nil>
STEP: delete the pod
Jan 28 00:28:22.113: INFO: Waiting for pod downwardapi-volume-f70cbe1c-4685-49f1-9fcf-e5e775a47042 to disappear
Jan 28 00:28:22.127: INFO: Pod downwardapi-volume-f70cbe1c-4685-49f1-9fcf-e5e775a47042 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:28:22.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8219" for this suite.

• [SLOW TEST:10.368 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":118,"skipped":2087,"failed":0}
SSSSS
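
The pod under test above surfaces its own CPU limit to the container through a downwardAPI volume, so the container reads the limit from a file and the test asserts on the logged contents. A hedged sketch of such a pod spec using the core/v1 types (pod name, command, mount path and the 2-CPU limit are illustrative; the image appears elsewhere in this log):

package main

import (
    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod sketches a pod whose container cpu limit is projected
// into /etc/podinfo/cpu_limit via a downwardAPI volume item.
func downwardAPIPod() *v1.Pod {
    return &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
        Spec: v1.PodSpec{
            RestartPolicy: v1.RestartPolicyNever,
            Containers: []v1.Container{{
                Name:    "client-container",
                Image:   "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                Resources: v1.ResourceRequirements{
                    Limits: v1.ResourceList{v1.ResourceCPU: resource.MustParse("2")},
                },
                VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []v1.Volume{{
                Name: "podinfo",
                VolumeSource: v1.VolumeSource{
                    DownwardAPI: &v1.DownwardAPIVolumeSource{
                        Items: []v1.DownwardAPIVolumeFile{{
                            Path: "cpu_limit",
                            ResourceFieldRef: &v1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "limits.cpu",
                            },
                        }},
                    },
                },
            }},
        },
    }
}

func main() { _ = downwardAPIPod() }
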
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:28:22.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:28:22.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:28:30.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7531" for this suite.

• [SLOW TEST:8.574 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":280,"completed":119,"skipped":2092,"failed":0}
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:28:30.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:28:30.868: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 28 00:28:33.998: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:28:34.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5825" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":280,"completed":120,"skipped":2092,"failed":0}
SSSS
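
The failure condition surfaced above is the ReplicaFailure condition that the replication controller manager sets on the RC status when quota rejects pod creation, and clears again once the RC is scaled down to fit the quota. A minimal sketch of that status check using the core/v1 types (the sample condition is constructed by hand; FailedCreate is the reason typically reported when pod creation is rejected):

package main

import (
    "fmt"

    v1 "k8s.io/api/core/v1"
)

// hasReplicaFailure reports whether an RC currently carries a true
// ReplicaFailure condition, which is what the test polls for above.
func hasReplicaFailure(rc *v1.ReplicationController) bool {
    for _, c := range rc.Status.Conditions {
        if c.Type == v1.ReplicationControllerReplicaFailure && c.Status == v1.ConditionTrue {
            return true
        }
    }
    return false
}

func main() {
    rc := &v1.ReplicationController{}
    rc.Status.Conditions = []v1.ReplicationControllerCondition{{
        Type:   v1.ReplicationControllerReplicaFailure,
        Status: v1.ConditionTrue,
        Reason: "FailedCreate",
    }}
    fmt.Println(hasReplicaFailure(rc)) // true until scaled down within quota
}
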
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:28:34.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 28 00:28:35.344: INFO: Number of nodes with available pods: 0
Jan 28 00:28:35.344: INFO: Node jerma-node is not yet running the expected daemon pod
Jan 28 00:28:37.673: INFO: Number of nodes with available pods: 0
Jan 28 00:28:37.673: INFO: Node jerma-node is not yet running the expected daemon pod
Jan 28 00:28:39.512: INFO: Number of nodes with available pods: 0
Jan 28 00:28:39.512: INFO: Node jerma-node is not yet running the expected daemon pod
Jan 28 00:28:41.570: INFO: Number of nodes with available pods: 0
Jan 28 00:28:41.570: INFO: Node jerma-node is not yet running the expected daemon pod
Jan 28 00:28:42.728: INFO: Number of nodes with available pods: 0
Jan 28 00:28:42.728: INFO: Node jerma-node is not yet running the expected daemon pod
Jan 28 00:28:44.151: INFO: Number of nodes with available pods: 0
Jan 28 00:28:44.151: INFO: Node jerma-node is not yet running the expected daemon pod
Jan 28 00:28:44.804: INFO: Number of nodes with available pods: 0
Jan 28 00:28:44.804: INFO: Node jerma-node is not yet running the expected daemon pod
Jan 28 00:28:45.540: INFO: Number of nodes with available pods: 0
Jan 28 00:28:45.540: INFO: Node jerma-node is not yet running the expected daemon pod
Jan 28 00:28:48.108: INFO: Number of nodes with available pods: 0
Jan 28 00:28:48.108: INFO: Node jerma-node is not yet running the expected daemon pod
Jan 28 00:28:48.824: INFO: Number of nodes with available pods: 1
Jan 28 00:28:48.824: INFO: Node jerma-server-mvvl6gufaqub is not yet running the expected daemon pod
Jan 28 00:28:49.599: INFO: Number of nodes with available pods: 1
Jan 28 00:28:49.599: INFO: Node jerma-server-mvvl6gufaqub is not yet running the expected daemon pod
Jan 28 00:28:50.357: INFO: Number of nodes with available pods: 1
Jan 28 00:28:50.357: INFO: Node jerma-server-mvvl6gufaqub is not yet running the expected daemon pod
Jan 28 00:28:51.357: INFO: Number of nodes with available pods: 2
Jan 28 00:28:51.357: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 28 00:28:51.415: INFO: Number of nodes with available pods: 1
Jan 28 00:28:51.416: INFO: Node jerma-server-mvvl6gufaqub is not yet running the expected daemon pod
Jan 28 00:28:52.430: INFO: Number of nodes with available pods: 1
Jan 28 00:28:52.430: INFO: Node jerma-server-mvvl6gufaqub is not yet running the expected daemon pod
Jan 28 00:28:53.430: INFO: Number of nodes with available pods: 1
Jan 28 00:28:53.430: INFO: Node jerma-server-mvvl6gufaqub is not yet running the expected daemon pod
Jan 28 00:28:54.427: INFO: Number of nodes with available pods: 1
Jan 28 00:28:54.428: INFO: Node jerma-server-mvvl6gufaqub is not yet running the expected daemon pod
Jan 28 00:28:55.428: INFO: Number of nodes with available pods: 1
Jan 28 00:28:55.429: INFO: Node jerma-server-mvvl6gufaqub is not yet running the expected daemon pod
Jan 28 00:28:56.439: INFO: Number of nodes with available pods: 1
Jan 28 00:28:56.439: INFO: Node jerma-server-mvvl6gufaqub is not yet running the expected daemon pod
Jan 28 00:28:57.435: INFO: Number of nodes with available pods: 1
Jan 28 00:28:57.435: INFO: Node jerma-server-mvvl6gufaqub is not yet running the expected daemon pod
Jan 28 00:28:58.428: INFO: Number of nodes with available pods: 1
Jan 28 00:28:58.428: INFO: Node jerma-server-mvvl6gufaqub is not yet running the expected daemon pod
Jan 28 00:28:59.428: INFO: Number of nodes with available pods: 1
Jan 28 00:28:59.428: INFO: Node jerma-server-mvvl6gufaqub is not yet running the expected daemon pod
Jan 28 00:29:00.431: INFO: Number of nodes with available pods: 1
Jan 28 00:29:00.431: INFO: Node jerma-server-mvvl6gufaqub is not yet running the expected daemon pod
Jan 28 00:29:02.707: INFO: Number of nodes with available pods: 1
Jan 28 00:29:02.707: INFO: Node jerma-server-mvvl6gufaqub is not yet running the expected daemon pod
Jan 28 00:29:03.735: INFO: Number of nodes with available pods: 1
Jan 28 00:29:03.735: INFO: Node jerma-server-mvvl6gufaqub is not yet running the expected daemon pod
Jan 28 00:29:04.435: INFO: Number of nodes with available pods: 1
Jan 28 00:29:04.435: INFO: Node jerma-server-mvvl6gufaqub is not yet running the expected daemon pod
Jan 28 00:29:05.439: INFO: Number of nodes with available pods: 1
Jan 28 00:29:05.439: INFO: Node jerma-server-mvvl6gufaqub is not yet running the expected daemon pod
Jan 28 00:29:06.432: INFO: Number of nodes with available pods: 2
Jan 28 00:29:06.432: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7100, will wait for the garbage collector to delete the pods
Jan 28 00:29:06.512: INFO: Deleting DaemonSet.extensions daemon-set took: 13.685075ms
Jan 28 00:29:07.013: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.736792ms
Jan 28 00:29:13.636: INFO: Number of nodes with available pods: 0
Jan 28 00:29:13.636: INFO: Number of running nodes: 0, number of available pods: 0
Jan 28 00:29:13.642: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7100/daemonsets","resourceVersion":"4779448"},"items":null}

Jan 28 00:29:13.657: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7100/pods","resourceVersion":"4779449"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:29:13.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7100" for this suite.

• [SLOW TEST:39.258 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":280,"completed":121,"skipped":2096,"failed":0}
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:29:13.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-788bb0e2-c3e8-4c4d-bb14-fafd2a71c3f0 in namespace container-probe-7148
Jan 28 00:29:21.833: INFO: Started pod liveness-788bb0e2-c3e8-4c4d-bb14-fafd2a71c3f0 in namespace container-probe-7148
STEP: checking the pod's current state and verifying that restartCount is present
Jan 28 00:29:21.852: INFO: Initial restart count of pod liveness-788bb0e2-c3e8-4c4d-bb14-fafd2a71c3f0 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:33:22.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7148" for this suite.

• [SLOW TEST:248.865 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":280,"completed":122,"skipped":2096,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:33:22.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Jan 28 00:33:22.699: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 28 00:33:22.713: INFO: Waiting for terminating namespaces to be deleted...
Jan 28 00:33:22.716: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 28 00:33:22.740: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 28 00:33:22.740: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 00:33:22.740: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 28 00:33:22.740: INFO: 	Container weave ready: true, restart count 1
Jan 28 00:33:22.740: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 00:33:22.740: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 28 00:33:22.837: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 28 00:33:22.837: INFO: 	Container weave ready: true, restart count 0
Jan 28 00:33:22.837: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 00:33:22.837: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 28 00:33:22.837: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 28 00:33:22.837: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 28 00:33:22.837: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 00:33:22.837: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 28 00:33:22.837: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 28 00:33:22.837: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 28 00:33:22.837: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 28 00:33:22.837: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 28 00:33:22.837: INFO: 	Container etcd ready: true, restart count 1
Jan 28 00:33:22.837: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 28 00:33:22.837: INFO: 	Container coredns ready: true, restart count 0
Jan 28 00:33:22.837: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 28 00:33:22.837: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-2f2504bb-39f4-47c8-ac50-848eba03c067 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (i.e. left as the empty string), expecting it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides, expecting it not to be scheduled
STEP: removing the label kubernetes.io/e2e-2f2504bb-39f4-47c8-ac50-848eba03c067 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-2f2504bb-39f4-47c8-ac50-848eba03c067
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:38:41.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7786" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:318.788 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":280,"completed":123,"skipped":2114,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:38:41.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:38:52.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8989" for this suite.

• [SLOW TEST:11.226 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":280,"completed":124,"skipped":2120,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:38:52.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 28 00:38:52.665: INFO: Waiting up to 5m0s for pod "pod-ece491ba-8af1-4a33-a9fd-4b77defbca2f" in namespace "emptydir-7324" to be "success or failure"
Jan 28 00:38:52.690: INFO: Pod "pod-ece491ba-8af1-4a33-a9fd-4b77defbca2f": Phase="Pending", Reason="", readiness=false. Elapsed: 24.124467ms
Jan 28 00:38:54.701: INFO: Pod "pod-ece491ba-8af1-4a33-a9fd-4b77defbca2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035634473s
Jan 28 00:38:56.712: INFO: Pod "pod-ece491ba-8af1-4a33-a9fd-4b77defbca2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046824319s
Jan 28 00:38:58.762: INFO: Pod "pod-ece491ba-8af1-4a33-a9fd-4b77defbca2f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096537256s
Jan 28 00:39:00.777: INFO: Pod "pod-ece491ba-8af1-4a33-a9fd-4b77defbca2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.111556851s
STEP: Saw pod success
Jan 28 00:39:00.777: INFO: Pod "pod-ece491ba-8af1-4a33-a9fd-4b77defbca2f" satisfied condition "success or failure"
Jan 28 00:39:00.787: INFO: Trying to get logs from node jerma-node pod pod-ece491ba-8af1-4a33-a9fd-4b77defbca2f container test-container: 
STEP: delete the pod
Jan 28 00:39:00.875: INFO: Waiting for pod pod-ece491ba-8af1-4a33-a9fd-4b77defbca2f to disappear
Jan 28 00:39:00.925: INFO: Pod pod-ece491ba-8af1-4a33-a9fd-4b77defbca2f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:39:00.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7324" for this suite.

• [SLOW TEST:8.381 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":125,"skipped":2154,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:39:00.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating cluster-info
Jan 28 00:39:01.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 28 00:39:03.509: INFO: stderr: ""
Jan 28 00:39:03.509: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:39:03.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4154" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":280,"completed":126,"skipped":2175,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:39:03.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 28 00:39:03.763: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21ddb020-3b95-48e2-9635-e73e9135922b" in namespace "projected-3483" to be "success or failure"
Jan 28 00:39:03.934: INFO: Pod "downwardapi-volume-21ddb020-3b95-48e2-9635-e73e9135922b": Phase="Pending", Reason="", readiness=false. Elapsed: 170.754909ms
Jan 28 00:39:05.942: INFO: Pod "downwardapi-volume-21ddb020-3b95-48e2-9635-e73e9135922b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178990497s
Jan 28 00:39:07.950: INFO: Pod "downwardapi-volume-21ddb020-3b95-48e2-9635-e73e9135922b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.186983155s
Jan 28 00:39:09.956: INFO: Pod "downwardapi-volume-21ddb020-3b95-48e2-9635-e73e9135922b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.192713003s
Jan 28 00:39:11.996: INFO: Pod "downwardapi-volume-21ddb020-3b95-48e2-9635-e73e9135922b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.232459025s
STEP: Saw pod success
Jan 28 00:39:11.996: INFO: Pod "downwardapi-volume-21ddb020-3b95-48e2-9635-e73e9135922b" satisfied condition "success or failure"
Jan 28 00:39:12.004: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-21ddb020-3b95-48e2-9635-e73e9135922b container client-container: 
STEP: delete the pod
Jan 28 00:39:12.091: INFO: Waiting for pod downwardapi-volume-21ddb020-3b95-48e2-9635-e73e9135922b to disappear
Jan 28 00:39:12.152: INFO: Pod downwardapi-volume-21ddb020-3b95-48e2-9635-e73e9135922b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:39:12.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3483" for this suite.

• [SLOW TEST:8.642 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":127,"skipped":2185,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:39:12.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 28 00:39:12.343: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cabd757d-af66-4b2c-8ce9-11486e29c435" in namespace "projected-2803" to be "success or failure"
Jan 28 00:39:12.353: INFO: Pod "downwardapi-volume-cabd757d-af66-4b2c-8ce9-11486e29c435": Phase="Pending", Reason="", readiness=false. Elapsed: 9.813375ms
Jan 28 00:39:14.363: INFO: Pod "downwardapi-volume-cabd757d-af66-4b2c-8ce9-11486e29c435": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019627932s
Jan 28 00:39:16.370: INFO: Pod "downwardapi-volume-cabd757d-af66-4b2c-8ce9-11486e29c435": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026656959s
Jan 28 00:39:18.376: INFO: Pod "downwardapi-volume-cabd757d-af66-4b2c-8ce9-11486e29c435": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032476083s
Jan 28 00:39:20.435: INFO: Pod "downwardapi-volume-cabd757d-af66-4b2c-8ce9-11486e29c435": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.091794763s
STEP: Saw pod success
Jan 28 00:39:20.435: INFO: Pod "downwardapi-volume-cabd757d-af66-4b2c-8ce9-11486e29c435" satisfied condition "success or failure"
Jan 28 00:39:20.442: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-cabd757d-af66-4b2c-8ce9-11486e29c435 container client-container: 
STEP: delete the pod
Jan 28 00:39:20.595: INFO: Waiting for pod downwardapi-volume-cabd757d-af66-4b2c-8ce9-11486e29c435 to disappear
Jan 28 00:39:20.604: INFO: Pod downwardapi-volume-cabd757d-af66-4b2c-8ce9-11486e29c435 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:39:20.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2803" for this suite.

• [SLOW TEST:8.458 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":128,"skipped":2205,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:39:20.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 28 00:39:21.018: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5190 /api/v1/namespaces/watch-5190/configmaps/e2e-watch-test-label-changed 62775417-a455-480a-8c84-79b8ce725e3a 4781024 0 2020-01-28 00:39:20 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 28 00:39:21.019: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5190 /api/v1/namespaces/watch-5190/configmaps/e2e-watch-test-label-changed 62775417-a455-480a-8c84-79b8ce725e3a 4781025 0 2020-01-28 00:39:20 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 28 00:39:21.019: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5190 /api/v1/namespaces/watch-5190/configmaps/e2e-watch-test-label-changed 62775417-a455-480a-8c84-79b8ce725e3a 4781026 0 2020-01-28 00:39:20 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 28 00:39:31.105: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5190 /api/v1/namespaces/watch-5190/configmaps/e2e-watch-test-label-changed 62775417-a455-480a-8c84-79b8ce725e3a 4781060 0 2020-01-28 00:39:20 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 28 00:39:31.105: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5190 /api/v1/namespaces/watch-5190/configmaps/e2e-watch-test-label-changed 62775417-a455-480a-8c84-79b8ce725e3a 4781061 0 2020-01-28 00:39:20 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 28 00:39:31.106: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-5190 /api/v1/namespaces/watch-5190/configmaps/e2e-watch-test-label-changed 62775417-a455-480a-8c84-79b8ce725e3a 4781062 0 2020-01-28 00:39:20 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:39:31.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5190" for this suite.

• [SLOW TEST:10.527 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":280,"completed":129,"skipped":2224,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:39:31.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 28 00:39:31.322: INFO: Waiting up to 5m0s for pod "pod-87e6d2e1-63be-4a5a-8b59-df2ad0977451" in namespace "emptydir-8095" to be "success or failure"
Jan 28 00:39:31.339: INFO: Pod "pod-87e6d2e1-63be-4a5a-8b59-df2ad0977451": Phase="Pending", Reason="", readiness=false. Elapsed: 16.479814ms
Jan 28 00:39:33.345: INFO: Pod "pod-87e6d2e1-63be-4a5a-8b59-df2ad0977451": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02229049s
Jan 28 00:39:35.354: INFO: Pod "pod-87e6d2e1-63be-4a5a-8b59-df2ad0977451": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032141374s
Jan 28 00:39:37.363: INFO: Pod "pod-87e6d2e1-63be-4a5a-8b59-df2ad0977451": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040222902s
Jan 28 00:39:39.372: INFO: Pod "pod-87e6d2e1-63be-4a5a-8b59-df2ad0977451": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049369978s
STEP: Saw pod success
Jan 28 00:39:39.372: INFO: Pod "pod-87e6d2e1-63be-4a5a-8b59-df2ad0977451" satisfied condition "success or failure"
Jan 28 00:39:39.377: INFO: Trying to get logs from node jerma-node pod pod-87e6d2e1-63be-4a5a-8b59-df2ad0977451 container test-container: 
STEP: delete the pod
Jan 28 00:39:39.428: INFO: Waiting for pod pod-87e6d2e1-63be-4a5a-8b59-df2ad0977451 to disappear
Jan 28 00:39:39.555: INFO: Pod pod-87e6d2e1-63be-4a5a-8b59-df2ad0977451 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:39:39.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8095" for this suite.

• [SLOW TEST:8.426 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":130,"skipped":2229,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:39:39.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 28 00:39:39.755: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42962981-3458-4089-9dc9-7bb4854de8b8" in namespace "downward-api-575" to be "success or failure"
Jan 28 00:39:39.792: INFO: Pod "downwardapi-volume-42962981-3458-4089-9dc9-7bb4854de8b8": Phase="Pending", Reason="", readiness=false. Elapsed: 36.601085ms
Jan 28 00:39:41.801: INFO: Pod "downwardapi-volume-42962981-3458-4089-9dc9-7bb4854de8b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046163295s
Jan 28 00:39:43.814: INFO: Pod "downwardapi-volume-42962981-3458-4089-9dc9-7bb4854de8b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058782534s
Jan 28 00:39:45.862: INFO: Pod "downwardapi-volume-42962981-3458-4089-9dc9-7bb4854de8b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106530762s
Jan 28 00:39:47.868: INFO: Pod "downwardapi-volume-42962981-3458-4089-9dc9-7bb4854de8b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.112965257s
STEP: Saw pod success
Jan 28 00:39:47.868: INFO: Pod "downwardapi-volume-42962981-3458-4089-9dc9-7bb4854de8b8" satisfied condition "success or failure"
Jan 28 00:39:47.872: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-42962981-3458-4089-9dc9-7bb4854de8b8 container client-container: 
STEP: delete the pod
Jan 28 00:39:47.970: INFO: Waiting for pod downwardapi-volume-42962981-3458-4089-9dc9-7bb4854de8b8 to disappear
Jan 28 00:39:47.989: INFO: Pod downwardapi-volume-42962981-3458-4089-9dc9-7bb4854de8b8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:39:47.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-575" for this suite.

• [SLOW TEST:8.431 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":131,"skipped":2255,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:39:48.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1863
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 28 00:39:48.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8951'
Jan 28 00:39:48.284: INFO: stderr: ""
Jan 28 00:39:48.284: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1868
Jan 28 00:39:48.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8951'
Jan 28 00:39:48.696: INFO: stderr: ""
Jan 28 00:39:48.696: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:39:48.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8951" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":280,"completed":132,"skipped":2260,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:39:48.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-8574750a-c374-42d4-b65e-87a2bd6996b2
STEP: Creating a pod to test consume secrets
Jan 28 00:39:48.893: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-50ca95c8-135f-4167-bda7-d372bea61b81" in namespace "projected-3451" to be "success or failure"
Jan 28 00:39:48.905: INFO: Pod "pod-projected-secrets-50ca95c8-135f-4167-bda7-d372bea61b81": Phase="Pending", Reason="", readiness=false. Elapsed: 11.766717ms
Jan 28 00:39:50.911: INFO: Pod "pod-projected-secrets-50ca95c8-135f-4167-bda7-d372bea61b81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017872693s
Jan 28 00:39:52.916: INFO: Pod "pod-projected-secrets-50ca95c8-135f-4167-bda7-d372bea61b81": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023302084s
Jan 28 00:39:54.922: INFO: Pod "pod-projected-secrets-50ca95c8-135f-4167-bda7-d372bea61b81": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029403453s
Jan 28 00:39:56.929: INFO: Pod "pod-projected-secrets-50ca95c8-135f-4167-bda7-d372bea61b81": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036585337s
Jan 28 00:39:58.938: INFO: Pod "pod-projected-secrets-50ca95c8-135f-4167-bda7-d372bea61b81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.04484724s
STEP: Saw pod success
Jan 28 00:39:58.938: INFO: Pod "pod-projected-secrets-50ca95c8-135f-4167-bda7-d372bea61b81" satisfied condition "success or failure"
Jan 28 00:39:58.944: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-50ca95c8-135f-4167-bda7-d372bea61b81 container projected-secret-volume-test: 
STEP: delete the pod
Jan 28 00:39:59.016: INFO: Waiting for pod pod-projected-secrets-50ca95c8-135f-4167-bda7-d372bea61b81 to disappear
Jan 28 00:39:59.045: INFO: Pod pod-projected-secrets-50ca95c8-135f-4167-bda7-d372bea61b81 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:39:59.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3451" for this suite.

• [SLOW TEST:10.349 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":133,"skipped":2299,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:39:59.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:39:59.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-221" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":280,"completed":134,"skipped":2329,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:39:59.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:40:10.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9449" for this suite.

• [SLOW TEST:11.263 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":280,"completed":135,"skipped":2337,"failed":0}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:40:10.397: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 28 00:40:16.790: INFO: 0 pods remaining
Jan 28 00:40:16.790: INFO: 0 pods have nil DeletionTimestamp
Jan 28 00:40:16.790: INFO: 
STEP: Gathering metrics
W0128 00:40:17.801842       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 00:40:17.802: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:40:17.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5599" for this suite.

• [SLOW TEST:7.421 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":280,"completed":136,"skipped":2340,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:40:17.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-b88ab592-6fe1-4b03-971a-5f1e37400226
STEP: Creating a pod to test consume configMaps
Jan 28 00:40:18.590: INFO: Waiting up to 5m0s for pod "pod-configmaps-d5ba4590-d2a1-49b2-811e-7e0867543a2e" in namespace "configmap-8809" to be "success or failure"
Jan 28 00:40:18.682: INFO: Pod "pod-configmaps-d5ba4590-d2a1-49b2-811e-7e0867543a2e": Phase="Pending", Reason="", readiness=false. Elapsed: 91.347134ms
Jan 28 00:40:22.795: INFO: Pod "pod-configmaps-d5ba4590-d2a1-49b2-811e-7e0867543a2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.204472426s
Jan 28 00:40:25.008: INFO: Pod "pod-configmaps-d5ba4590-d2a1-49b2-811e-7e0867543a2e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.41804532s
Jan 28 00:40:27.017: INFO: Pod "pod-configmaps-d5ba4590-d2a1-49b2-811e-7e0867543a2e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.426322199s
Jan 28 00:40:29.023: INFO: Pod "pod-configmaps-d5ba4590-d2a1-49b2-811e-7e0867543a2e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.432957809s
Jan 28 00:40:31.031: INFO: Pod "pod-configmaps-d5ba4590-d2a1-49b2-811e-7e0867543a2e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.440202013s
Jan 28 00:40:33.043: INFO: Pod "pod-configmaps-d5ba4590-d2a1-49b2-811e-7e0867543a2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.452625053s
STEP: Saw pod success
Jan 28 00:40:33.043: INFO: Pod "pod-configmaps-d5ba4590-d2a1-49b2-811e-7e0867543a2e" satisfied condition "success or failure"
Jan 28 00:40:33.047: INFO: Trying to get logs from node jerma-node pod pod-configmaps-d5ba4590-d2a1-49b2-811e-7e0867543a2e container configmap-volume-test: 
STEP: delete the pod
Jan 28 00:40:33.118: INFO: Waiting for pod pod-configmaps-d5ba4590-d2a1-49b2-811e-7e0867543a2e to disappear
Jan 28 00:40:33.135: INFO: Pod pod-configmaps-d5ba4590-d2a1-49b2-811e-7e0867543a2e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:40:33.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8809" for this suite.

• [SLOW TEST:15.368 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":137,"skipped":2341,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:40:33.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1735
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 28 00:40:33.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3109'
Jan 28 00:40:33.534: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 28 00:40:33.534: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1740
Jan 28 00:40:35.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3109'
Jan 28 00:40:35.929: INFO: stderr: ""
Jan 28 00:40:35.929: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:40:35.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3109" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":280,"completed":138,"skipped":2367,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:40:35.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 28 00:40:38.012: INFO: Pod name wrapped-volume-race-027394a9-0cd9-4667-86a4-1f77e356b1b1: Found 0 pods out of 5
Jan 28 00:40:43.023: INFO: Pod name wrapped-volume-race-027394a9-0cd9-4667-86a4-1f77e356b1b1: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-027394a9-0cd9-4667-86a4-1f77e356b1b1 in namespace emptydir-wrapper-7955, will wait for the garbage collector to delete the pods
Jan 28 00:41:09.132: INFO: Deleting ReplicationController wrapped-volume-race-027394a9-0cd9-4667-86a4-1f77e356b1b1 took: 18.629556ms
Jan 28 00:41:09.534: INFO: Terminating ReplicationController wrapped-volume-race-027394a9-0cd9-4667-86a4-1f77e356b1b1 pods took: 401.2858ms
STEP: Creating RC which spawns configmap-volume pods
Jan 28 00:41:32.509: INFO: Pod name wrapped-volume-race-da189806-f980-4bee-9499-3b7fb28f0497: Found 0 pods out of 5
Jan 28 00:41:37.530: INFO: Pod name wrapped-volume-race-da189806-f980-4bee-9499-3b7fb28f0497: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-da189806-f980-4bee-9499-3b7fb28f0497 in namespace emptydir-wrapper-7955, will wait for the garbage collector to delete the pods
Jan 28 00:42:03.689: INFO: Deleting ReplicationController wrapped-volume-race-da189806-f980-4bee-9499-3b7fb28f0497 took: 12.190091ms
Jan 28 00:42:04.090: INFO: Terminating ReplicationController wrapped-volume-race-da189806-f980-4bee-9499-3b7fb28f0497 pods took: 400.369898ms
STEP: Creating RC which spawns configmap-volume pods
Jan 28 00:42:23.331: INFO: Pod name wrapped-volume-race-7f031639-d913-41d0-b9e2-568e4f646e03: Found 0 pods out of 5
Jan 28 00:42:28.343: INFO: Pod name wrapped-volume-race-7f031639-d913-41d0-b9e2-568e4f646e03: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-7f031639-d913-41d0-b9e2-568e4f646e03 in namespace emptydir-wrapper-7955, will wait for the garbage collector to delete the pods
Jan 28 00:42:58.437: INFO: Deleting ReplicationController wrapped-volume-race-7f031639-d913-41d0-b9e2-568e4f646e03 took: 11.758601ms
Jan 28 00:42:58.938: INFO: Terminating ReplicationController wrapped-volume-race-7f031639-d913-41d0-b9e2-568e4f646e03 pods took: 500.598704ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:43:14.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7955" for this suite.

• [SLOW TEST:158.261 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":280,"completed":139,"skipped":2397,"failed":0}
S
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:43:14.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating server pod server in namespace prestop-1577
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-1577
STEP: Deleting pre-stop pod
Jan 28 00:43:33.448: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:43:33.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-1577" for this suite.

• [SLOW TEST:19.298 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":280,"completed":140,"skipped":2398,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:43:33.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-10517e42-2e08-43c5-9717-fed1b90cb37d
STEP: Creating a pod to test consume secrets
Jan 28 00:43:33.634: INFO: Waiting up to 5m0s for pod "pod-secrets-7b2f8a04-86d4-4585-a848-19e0cbf658f0" in namespace "secrets-147" to be "success or failure"
Jan 28 00:43:33.683: INFO: Pod "pod-secrets-7b2f8a04-86d4-4585-a848-19e0cbf658f0": Phase="Pending", Reason="", readiness=false. Elapsed: 48.853167ms
Jan 28 00:43:35.695: INFO: Pod "pod-secrets-7b2f8a04-86d4-4585-a848-19e0cbf658f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060515068s
Jan 28 00:43:42.031: INFO: Pod "pod-secrets-7b2f8a04-86d4-4585-a848-19e0cbf658f0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.396491228s
Jan 28 00:43:44.035: INFO: Pod "pod-secrets-7b2f8a04-86d4-4585-a848-19e0cbf658f0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.400278279s
Jan 28 00:43:46.041: INFO: Pod "pod-secrets-7b2f8a04-86d4-4585-a848-19e0cbf658f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.407143147s
STEP: Saw pod success
Jan 28 00:43:46.041: INFO: Pod "pod-secrets-7b2f8a04-86d4-4585-a848-19e0cbf658f0" satisfied condition "success or failure"
Jan 28 00:43:46.047: INFO: Trying to get logs from node jerma-node pod pod-secrets-7b2f8a04-86d4-4585-a848-19e0cbf658f0 container secret-volume-test: 
STEP: delete the pod
Jan 28 00:43:46.100: INFO: Waiting for pod pod-secrets-7b2f8a04-86d4-4585-a848-19e0cbf658f0 to disappear
Jan 28 00:43:46.115: INFO: Pod pod-secrets-7b2f8a04-86d4-4585-a848-19e0cbf658f0 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:43:46.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-147" for this suite.

• [SLOW TEST:12.635 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":141,"skipped":2411,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:43:46.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:43:46.322: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 28 00:43:51.435: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 28 00:43:55.464: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jan 28 00:43:55.552: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-1815 /apis/apps/v1/namespaces/deployment-1815/deployments/test-cleanup-deployment cab8fbfd-8cfc-49a2-b3e5-4a35a43e2444 4782823 1 2020-01-28 00:43:55 +0000 UTC   map[name:cleanup-pod] map[] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0037a9008  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},}

Jan 28 00:43:55.578: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-1815 /apis/apps/v1/namespaces/deployment-1815/replicasets/test-cleanup-deployment-55ffc6b7b6 b725e9a3-2295-4135-b4d5-459e6ed80df5 4782825 1 2020-01-28 00:43:55 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment cab8fbfd-8cfc-49a2-b3e5-4a35a43e2444 0xc0027f3f47 0xc0027f3f48}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0027f3fb8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 28 00:43:55.578: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan 28 00:43:55.578: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller  deployment-1815 /apis/apps/v1/namespaces/deployment-1815/replicasets/test-cleanup-controller be1982d4-b61d-4441-9023-ed7c74aed896 4782824 1 2020-01-28 00:43:46 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment cab8fbfd-8cfc-49a2-b3e5-4a35a43e2444 0xc0027f3e77 0xc0027f3e78}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0027f3ed8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 28 00:43:55.603: INFO: Pod "test-cleanup-controller-k4dwg" is available:
&Pod{ObjectMeta:{test-cleanup-controller-k4dwg test-cleanup-controller- deployment-1815 /api/v1/namespaces/deployment-1815/pods/test-cleanup-controller-k4dwg f10855af-15a5-4080-ba29-08f57a561e59 4782816 0 2020-01-28 00:43:46 +0000 UTC   map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller be1982d4-b61d-4441-9023-ed7c74aed896 0xc003748557 0xc003748558}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jzkjd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jzkjd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jzkjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:43:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:43:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:43:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:43:46 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-28 00:43:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 00:43:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://fd424a33ce1007cbcae3e3ede12d4357d252f4dc3e4ec4657df28fb45cdea65e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 00:43:55.603: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-b69ds" is not available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-b69ds test-cleanup-deployment-55ffc6b7b6- deployment-1815 /api/v1/namespaces/deployment-1815/pods/test-cleanup-deployment-55ffc6b7b6-b69ds 8a69a3c1-30d2-456c-bbf9-43ae3383126e 4782830 0 2020-01-28 00:43:55 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 b725e9a3-2295-4135-b4d5-459e6ed80df5 0xc003748837 0xc003748838}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jzkjd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jzkjd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jzkjd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:43:55 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:43:55.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1815" for this suite.

• [SLOW TEST:9.491 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":280,"completed":142,"skipped":2419,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:43:55.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-0ecc7462-a64f-4a58-a577-06fb8109b1df
STEP: Creating a pod to test consume secrets
Jan 28 00:43:55.916: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-051d8ec4-0081-4d93-a81c-dc69303381cd" in namespace "projected-6777" to be "success or failure"
Jan 28 00:43:55.929: INFO: Pod "pod-projected-secrets-051d8ec4-0081-4d93-a81c-dc69303381cd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.609516ms
Jan 28 00:43:57.935: INFO: Pod "pod-projected-secrets-051d8ec4-0081-4d93-a81c-dc69303381cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018600563s
Jan 28 00:43:59.946: INFO: Pod "pod-projected-secrets-051d8ec4-0081-4d93-a81c-dc69303381cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030179294s
Jan 28 00:44:01.951: INFO: Pod "pod-projected-secrets-051d8ec4-0081-4d93-a81c-dc69303381cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034702119s
Jan 28 00:44:03.992: INFO: Pod "pod-projected-secrets-051d8ec4-0081-4d93-a81c-dc69303381cd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076083915s
Jan 28 00:44:05.997: INFO: Pod "pod-projected-secrets-051d8ec4-0081-4d93-a81c-dc69303381cd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.081166489s
Jan 28 00:44:08.003: INFO: Pod "pod-projected-secrets-051d8ec4-0081-4d93-a81c-dc69303381cd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.086929701s
Jan 28 00:44:10.008: INFO: Pod "pod-projected-secrets-051d8ec4-0081-4d93-a81c-dc69303381cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.092286981s
STEP: Saw pod success
Jan 28 00:44:10.009: INFO: Pod "pod-projected-secrets-051d8ec4-0081-4d93-a81c-dc69303381cd" satisfied condition "success or failure"
Jan 28 00:44:10.042: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-051d8ec4-0081-4d93-a81c-dc69303381cd container projected-secret-volume-test: 
STEP: delete the pod
Jan 28 00:44:10.095: INFO: Waiting for pod pod-projected-secrets-051d8ec4-0081-4d93-a81c-dc69303381cd to disappear
Jan 28 00:44:10.127: INFO: Pod pod-projected-secrets-051d8ec4-0081-4d93-a81c-dc69303381cd no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:44:10.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6777" for this suite.

• [SLOW TEST:14.508 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":143,"skipped":2425,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:44:10.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:44:10.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 28 00:44:13.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1226 create -f -'
Jan 28 00:44:16.214: INFO: stderr: ""
Jan 28 00:44:16.214: INFO: stdout: "e2e-test-crd-publish-openapi-3805-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 28 00:44:16.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1226 delete e2e-test-crd-publish-openapi-3805-crds test-cr'
Jan 28 00:44:16.388: INFO: stderr: ""
Jan 28 00:44:16.388: INFO: stdout: "e2e-test-crd-publish-openapi-3805-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jan 28 00:44:16.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1226 apply -f -'
Jan 28 00:44:16.718: INFO: stderr: ""
Jan 28 00:44:16.718: INFO: stdout: "e2e-test-crd-publish-openapi-3805-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 28 00:44:16.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1226 delete e2e-test-crd-publish-openapi-3805-crds test-cr'
Jan 28 00:44:16.889: INFO: stderr: ""
Jan 28 00:44:16.889: INFO: stdout: "e2e-test-crd-publish-openapi-3805-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 28 00:44:16.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3805-crds'
Jan 28 00:44:17.168: INFO: stderr: ""
Jan 28 00:44:17.168: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3805-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:44:20.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1226" for this suite.

• [SLOW TEST:10.002 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":280,"completed":144,"skipped":2437,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:44:20.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating api versions
Jan 28 00:44:20.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 28 00:44:20.458: INFO: stderr: ""
Jan 28 00:44:20.458: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:44:20.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7325" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":280,"completed":145,"skipped":2454,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:44:20.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 28 00:44:29.175: INFO: Successfully updated pod "labelsupdate48a1e1ec-ca39-475a-adbf-045582eb928b"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:44:31.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3274" for this suite.

• [SLOW TEST:10.821 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":146,"skipped":2455,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:44:31.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: executing a command with run --rm and attach with stdin
Jan 28 00:44:31.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3226 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan 28 00:44:38.911: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0128 00:44:37.628411    3305 log.go:172] (0xc000a0d1e0) (0xc000b183c0) Create stream\nI0128 00:44:37.628702    3305 log.go:172] (0xc000a0d1e0) (0xc000b183c0) Stream added, broadcasting: 1\nI0128 00:44:37.634988    3305 log.go:172] (0xc000a0d1e0) Reply frame received for 1\nI0128 00:44:37.635123    3305 log.go:172] (0xc000a0d1e0) (0xc0009b8000) Create stream\nI0128 00:44:37.635155    3305 log.go:172] (0xc000a0d1e0) (0xc0009b8000) Stream added, broadcasting: 3\nI0128 00:44:37.638335    3305 log.go:172] (0xc000a0d1e0) Reply frame received for 3\nI0128 00:44:37.638469    3305 log.go:172] (0xc000a0d1e0) (0xc0009b80a0) Create stream\nI0128 00:44:37.638485    3305 log.go:172] (0xc000a0d1e0) (0xc0009b80a0) Stream added, broadcasting: 5\nI0128 00:44:37.639941    3305 log.go:172] (0xc000a0d1e0) Reply frame received for 5\nI0128 00:44:37.639981    3305 log.go:172] (0xc000a0d1e0) (0xc000b18460) Create stream\nI0128 00:44:37.639996    3305 log.go:172] (0xc000a0d1e0) (0xc000b18460) Stream added, broadcasting: 7\nI0128 00:44:37.641279    3305 log.go:172] (0xc000a0d1e0) Reply frame received for 7\nI0128 00:44:37.641650    3305 log.go:172] (0xc0009b8000) (3) Writing data frame\nI0128 00:44:37.641813    3305 log.go:172] (0xc0009b8000) (3) Writing data frame\nI0128 00:44:37.643734    3305 log.go:172] (0xc000a0d1e0) Data frame received for 5\nI0128 00:44:37.643757    3305 log.go:172] (0xc0009b80a0) (5) Data frame handling\nI0128 00:44:37.643776    3305 log.go:172] (0xc0009b80a0) (5) Data frame sent\nI0128 00:44:37.645361    3305 log.go:172] (0xc000a0d1e0) Data frame received for 5\nI0128 00:44:37.645377    3305 log.go:172] (0xc0009b80a0) (5) Data frame handling\nI0128 00:44:37.645384    3305 log.go:172] (0xc0009b80a0) (5) Data frame sent\nI0128 00:44:38.821914    3305 log.go:172] (0xc000a0d1e0) Data frame received for 1\nI0128 00:44:38.822587    3305 log.go:172] (0xc000a0d1e0) (0xc0009b8000) Stream removed, broadcasting: 3\nI0128 00:44:38.822756    3305 log.go:172] (0xc000b183c0) (1) Data frame handling\nI0128 00:44:38.823065    3305 log.go:172] (0xc000a0d1e0) (0xc000b18460) Stream removed, broadcasting: 7\nI0128 00:44:38.823255    3305 log.go:172] (0xc000a0d1e0) (0xc0009b80a0) Stream removed, broadcasting: 5\nI0128 00:44:38.823406    3305 log.go:172] (0xc000b183c0) (1) Data frame sent\nI0128 00:44:38.823555    3305 log.go:172] (0xc000a0d1e0) (0xc000b183c0) Stream removed, broadcasting: 1\nI0128 00:44:38.823623    3305 log.go:172] (0xc000a0d1e0) Go away received\nI0128 00:44:38.826607    3305 log.go:172] (0xc000a0d1e0) (0xc000b183c0) Stream removed, broadcasting: 1\nI0128 00:44:38.826676    3305 log.go:172] (0xc000a0d1e0) (0xc0009b8000) Stream removed, broadcasting: 3\nI0128 00:44:38.826693    3305 log.go:172] (0xc000a0d1e0) (0xc0009b80a0) Stream removed, broadcasting: 5\nI0128 00:44:38.826708    3305 log.go:172] (0xc000a0d1e0) (0xc000b18460) Stream removed, broadcasting: 7\n"
Jan 28 00:44:38.911: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:44:40.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3226" for this suite.

• [SLOW TEST:9.636 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1946
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job  [Conformance]","total":280,"completed":147,"skipped":2476,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:44:40.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Jan 28 00:44:41.037: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 28 00:44:41.044: INFO: Waiting for terminating namespaces to be deleted...
Jan 28 00:44:41.047: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 28 00:44:41.053: INFO: labelsupdate48a1e1ec-ca39-475a-adbf-045582eb928b from projected-3274 started at 2020-01-28 00:44:20 +0000 UTC (1 container statuses recorded)
Jan 28 00:44:41.053: INFO: 	Container client-container ready: false, restart count 0
Jan 28 00:44:41.053: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 28 00:44:41.053: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 00:44:41.053: INFO: e2e-test-rm-busybox-job-86kl6 from kubectl-3226 started at 2020-01-28 00:44:31 +0000 UTC (1 container statuses recorded)
Jan 28 00:44:41.053: INFO: 	Container e2e-test-rm-busybox-job ready: false, restart count 0
Jan 28 00:44:41.053: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 28 00:44:41.053: INFO: 	Container weave ready: true, restart count 1
Jan 28 00:44:41.053: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 00:44:41.053: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 28 00:44:41.077: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 28 00:44:41.077: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 28 00:44:41.077: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 28 00:44:41.077: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 00:44:41.077: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 28 00:44:41.077: INFO: 	Container weave ready: true, restart count 0
Jan 28 00:44:41.077: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 00:44:41.077: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 28 00:44:41.077: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 28 00:44:41.077: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 28 00:44:41.077: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 28 00:44:41.077: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 28 00:44:41.077: INFO: 	Container etcd ready: true, restart count 1
Jan 28 00:44:41.077: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 28 00:44:41.077: INFO: 	Container coredns ready: true, restart count 0
Jan 28 00:44:41.077: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 28 00:44:41.077: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c5ace745-8d69-42ab-941b-a6c6805b7118 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-c5ace745-8d69-42ab-941b-a6c6805b7118 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c5ace745-8d69-42ab-941b-a6c6805b7118
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:44:59.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3578" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:18.594 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":280,"completed":148,"skipped":2489,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:44:59.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 28 00:44:59.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-105'
Jan 28 00:44:59.858: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 28 00:44:59.858: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795
Jan 28 00:44:59.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-105'
Jan 28 00:45:00.093: INFO: stderr: ""
Jan 28 00:45:00.093: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:45:00.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-105" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":280,"completed":149,"skipped":2527,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:45:00.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 28 00:45:10.407: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:45:10.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1865" for this suite.

• [SLOW TEST:10.383 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":150,"skipped":2592,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:45:10.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:45:24.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4725" for this suite.

• [SLOW TEST:13.997 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":280,"completed":151,"skipped":2597,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:45:24.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 28 00:45:33.201: INFO: Successfully updated pod "annotationupdate3b3d23d5-66ed-4dad-9fad-2be23aadaecb"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:45:35.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8730" for this suite.

• [SLOW TEST:10.798 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":152,"skipped":2602,"failed":0}
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:45:35.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override command
Jan 28 00:45:35.404: INFO: Waiting up to 5m0s for pod "client-containers-8a80ce42-09c5-46bc-9541-2367bf588d4c" in namespace "containers-9390" to be "success or failure"
Jan 28 00:45:35.420: INFO: Pod "client-containers-8a80ce42-09c5-46bc-9541-2367bf588d4c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.96536ms
Jan 28 00:45:37.423: INFO: Pod "client-containers-8a80ce42-09c5-46bc-9541-2367bf588d4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018647531s
Jan 28 00:45:39.430: INFO: Pod "client-containers-8a80ce42-09c5-46bc-9541-2367bf588d4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025624618s
Jan 28 00:45:41.457: INFO: Pod "client-containers-8a80ce42-09c5-46bc-9541-2367bf588d4c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052149644s
Jan 28 00:45:43.463: INFO: Pod "client-containers-8a80ce42-09c5-46bc-9541-2367bf588d4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058447538s
STEP: Saw pod success
Jan 28 00:45:43.463: INFO: Pod "client-containers-8a80ce42-09c5-46bc-9541-2367bf588d4c" satisfied condition "success or failure"
Jan 28 00:45:43.466: INFO: Trying to get logs from node jerma-node pod client-containers-8a80ce42-09c5-46bc-9541-2367bf588d4c container test-container: 
STEP: delete the pod
Jan 28 00:45:43.547: INFO: Waiting for pod client-containers-8a80ce42-09c5-46bc-9541-2367bf588d4c to disappear
Jan 28 00:45:43.552: INFO: Pod client-containers-8a80ce42-09c5-46bc-9541-2367bf588d4c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:45:43.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9390" for this suite.

• [SLOW TEST:8.272 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":280,"completed":153,"skipped":2602,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:45:43.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 28 00:45:52.278: INFO: Successfully updated pod "pod-update-activedeadlineseconds-456163fd-8cbe-431a-bd68-7b09e44b64f6"
Jan 28 00:45:52.278: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-456163fd-8cbe-431a-bd68-7b09e44b64f6" in namespace "pods-5580" to be "terminated due to deadline exceeded"
Jan 28 00:45:52.284: INFO: Pod "pod-update-activedeadlineseconds-456163fd-8cbe-431a-bd68-7b09e44b64f6": Phase="Running", Reason="", readiness=true. Elapsed: 5.938821ms
Jan 28 00:45:54.293: INFO: Pod "pod-update-activedeadlineseconds-456163fd-8cbe-431a-bd68-7b09e44b64f6": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.015270705s
Jan 28 00:45:54.294: INFO: Pod "pod-update-activedeadlineseconds-456163fd-8cbe-431a-bd68-7b09e44b64f6" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:45:54.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5580" for this suite.

• [SLOW TEST:10.774 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":280,"completed":154,"skipped":2618,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:45:54.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9813.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9813.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9813.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9813.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9813.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9813.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9813.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9813.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9813.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9813.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9813.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 112.183.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.183.112_udp@PTR;check="$$(dig +tcp +noall +answer +search 112.183.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.183.112_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9813.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9813.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9813.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9813.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9813.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9813.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9813.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9813.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9813.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9813.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9813.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 112.183.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.183.112_udp@PTR;check="$$(dig +tcp +noall +answer +search 112.183.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.183.112_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 00:46:06.916: INFO: Unable to read wheezy_udp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:06.976: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:07.038: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:07.046: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:07.089: INFO: Unable to read jessie_udp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:07.092: INFO: Unable to read jessie_tcp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:07.096: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:07.098: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:07.116: INFO: Lookups using dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7 failed for: [wheezy_udp@dns-test-service.dns-9813.svc.cluster.local wheezy_tcp@dns-test-service.dns-9813.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local jessie_udp@dns-test-service.dns-9813.svc.cluster.local jessie_tcp@dns-test-service.dns-9813.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local]

Jan 28 00:46:12.126: INFO: Unable to read wheezy_udp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:12.134: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:12.139: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:12.143: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:12.173: INFO: Unable to read jessie_udp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:12.176: INFO: Unable to read jessie_tcp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:12.179: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:12.182: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:12.205: INFO: Lookups using dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7 failed for: [wheezy_udp@dns-test-service.dns-9813.svc.cluster.local wheezy_tcp@dns-test-service.dns-9813.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local jessie_udp@dns-test-service.dns-9813.svc.cluster.local jessie_tcp@dns-test-service.dns-9813.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local]

Jan 28 00:46:17.129: INFO: Unable to read wheezy_udp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:17.140: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:17.153: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:17.165: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:17.200: INFO: Unable to read jessie_udp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:17.203: INFO: Unable to read jessie_tcp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:17.207: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:17.210: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:17.419: INFO: Lookups using dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7 failed for: [wheezy_udp@dns-test-service.dns-9813.svc.cluster.local wheezy_tcp@dns-test-service.dns-9813.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local jessie_udp@dns-test-service.dns-9813.svc.cluster.local jessie_tcp@dns-test-service.dns-9813.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local]

Jan 28 00:46:22.124: INFO: Unable to read wheezy_udp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:22.132: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:22.138: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:22.141: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:22.174: INFO: Unable to read jessie_udp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:22.177: INFO: Unable to read jessie_tcp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:22.180: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:22.183: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:22.199: INFO: Lookups using dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7 failed for: [wheezy_udp@dns-test-service.dns-9813.svc.cluster.local wheezy_tcp@dns-test-service.dns-9813.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local jessie_udp@dns-test-service.dns-9813.svc.cluster.local jessie_tcp@dns-test-service.dns-9813.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local]

Jan 28 00:46:27.127: INFO: Unable to read wheezy_udp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:27.138: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:27.145: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:27.150: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:27.181: INFO: Unable to read jessie_udp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:27.201: INFO: Unable to read jessie_tcp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:27.206: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:27.212: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:27.236: INFO: Lookups using dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7 failed for: [wheezy_udp@dns-test-service.dns-9813.svc.cluster.local wheezy_tcp@dns-test-service.dns-9813.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local jessie_udp@dns-test-service.dns-9813.svc.cluster.local jessie_tcp@dns-test-service.dns-9813.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local]

Jan 28 00:46:32.124: INFO: Unable to read wheezy_udp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:32.129: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:32.140: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:32.146: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:32.215: INFO: Unable to read jessie_udp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:32.221: INFO: Unable to read jessie_tcp@dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:32.227: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:32.231: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local from pod dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7: the server could not find the requested resource (get pods dns-test-4a13da5c-3f1a-4854-8119-382732437bc7)
Jan 28 00:46:32.249: INFO: Lookups using dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7 failed for: [wheezy_udp@dns-test-service.dns-9813.svc.cluster.local wheezy_tcp@dns-test-service.dns-9813.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local jessie_udp@dns-test-service.dns-9813.svc.cluster.local jessie_tcp@dns-test-service.dns-9813.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9813.svc.cluster.local]

Jan 28 00:46:37.191: INFO: DNS probes using dns-9813/dns-test-4a13da5c-3f1a-4854-8119-382732437bc7 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:46:37.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9813" for this suite.

• [SLOW TEST:43.139 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":280,"completed":155,"skipped":2619,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:46:37.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:46:37.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1124" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":280,"completed":156,"skipped":2622,"failed":0}

------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:46:37.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0128 00:46:48.604769       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 00:46:48.604: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:46:48.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9085" for this suite.

• [SLOW TEST:10.802 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":280,"completed":157,"skipped":2622,"failed":0}
SSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:46:48.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 28 00:46:56.707: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:46:57.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4359" for this suite.

• [SLOW TEST:9.146 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":280,"completed":158,"skipped":2625,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:46:57.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:75
Jan 28 00:46:57.893: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the sample API server.
Jan 28 00:46:58.993: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 28 00:47:01.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:47:03.333: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:47:05.393: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:47:07.287: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:47:09.287: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:47:11.286: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769219, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769218, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:47:14.332: INFO: Waited 1.03351314s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:66
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:47:14.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4736" for this suite.

• [SLOW TEST:17.293 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":280,"completed":159,"skipped":2635,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:47:15.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 28 00:47:15.231: INFO: Waiting up to 5m0s for pod "pod-04bfa830-3a54-401e-acbf-0bfd1ac741f4" in namespace "emptydir-5804" to be "success or failure"
Jan 28 00:47:15.241: INFO: Pod "pod-04bfa830-3a54-401e-acbf-0bfd1ac741f4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.727999ms
Jan 28 00:47:17.247: INFO: Pod "pod-04bfa830-3a54-401e-acbf-0bfd1ac741f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01591727s
Jan 28 00:47:19.253: INFO: Pod "pod-04bfa830-3a54-401e-acbf-0bfd1ac741f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022808182s
Jan 28 00:47:21.310: INFO: Pod "pod-04bfa830-3a54-401e-acbf-0bfd1ac741f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079257911s
Jan 28 00:47:23.318: INFO: Pod "pod-04bfa830-3a54-401e-acbf-0bfd1ac741f4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08758486s
Jan 28 00:47:25.324: INFO: Pod "pod-04bfa830-3a54-401e-acbf-0bfd1ac741f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093798282s
STEP: Saw pod success
Jan 28 00:47:25.324: INFO: Pod "pod-04bfa830-3a54-401e-acbf-0bfd1ac741f4" satisfied condition "success or failure"
Jan 28 00:47:25.328: INFO: Trying to get logs from node jerma-node pod pod-04bfa830-3a54-401e-acbf-0bfd1ac741f4 container test-container: 
STEP: delete the pod
Jan 28 00:47:25.603: INFO: Waiting for pod pod-04bfa830-3a54-401e-acbf-0bfd1ac741f4 to disappear
Jan 28 00:47:25.613: INFO: Pod pod-04bfa830-3a54-401e-acbf-0bfd1ac741f4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:47:25.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5804" for this suite.

• [SLOW TEST:10.571 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":160,"skipped":2652,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:47:25.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-153a9882-aec2-4708-81e7-a8eb8803fe37
STEP: Creating configMap with name cm-test-opt-upd-8dcc9074-edc4-4156-a33e-f2a4a4dc5d22
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-153a9882-aec2-4708-81e7-a8eb8803fe37
STEP: Updating configmap cm-test-opt-upd-8dcc9074-edc4-4156-a33e-f2a4a4dc5d22
STEP: Creating configMap with name cm-test-opt-create-35753bdd-d798-4580-b57c-8615e1382122
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:48:57.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4763" for this suite.

• [SLOW TEST:91.493 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":161,"skipped":2674,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:48:57.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:48:57.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:49:07.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6999" for this suite.

• [SLOW TEST:10.201 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":280,"completed":162,"skipped":2685,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:49:07.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 28 00:49:07.443: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1778 /api/v1/namespaces/watch-1778/configmaps/e2e-watch-test-configmap-a 658f3f97-fe2e-46ec-a53c-df5251bcfcee 4784206 0 2020-01-28 00:49:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 28 00:49:07.443: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1778 /api/v1/namespaces/watch-1778/configmaps/e2e-watch-test-configmap-a 658f3f97-fe2e-46ec-a53c-df5251bcfcee 4784206 0 2020-01-28 00:49:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 28 00:49:17.467: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1778 /api/v1/namespaces/watch-1778/configmaps/e2e-watch-test-configmap-a 658f3f97-fe2e-46ec-a53c-df5251bcfcee 4784240 0 2020-01-28 00:49:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 28 00:49:17.467: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1778 /api/v1/namespaces/watch-1778/configmaps/e2e-watch-test-configmap-a 658f3f97-fe2e-46ec-a53c-df5251bcfcee 4784240 0 2020-01-28 00:49:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 28 00:49:27.483: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1778 /api/v1/namespaces/watch-1778/configmaps/e2e-watch-test-configmap-a 658f3f97-fe2e-46ec-a53c-df5251bcfcee 4784264 0 2020-01-28 00:49:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 28 00:49:27.483: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1778 /api/v1/namespaces/watch-1778/configmaps/e2e-watch-test-configmap-a 658f3f97-fe2e-46ec-a53c-df5251bcfcee 4784264 0 2020-01-28 00:49:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 28 00:49:37.500: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1778 /api/v1/namespaces/watch-1778/configmaps/e2e-watch-test-configmap-a 658f3f97-fe2e-46ec-a53c-df5251bcfcee 4784288 0 2020-01-28 00:49:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 28 00:49:37.500: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1778 /api/v1/namespaces/watch-1778/configmaps/e2e-watch-test-configmap-a 658f3f97-fe2e-46ec-a53c-df5251bcfcee 4784288 0 2020-01-28 00:49:07 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 28 00:49:47.521: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1778 /api/v1/namespaces/watch-1778/configmaps/e2e-watch-test-configmap-b a0f92de4-0e86-447b-a64b-dde71f63f92e 4784315 0 2020-01-28 00:49:47 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 28 00:49:47.522: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1778 /api/v1/namespaces/watch-1778/configmaps/e2e-watch-test-configmap-b a0f92de4-0e86-447b-a64b-dde71f63f92e 4784315 0 2020-01-28 00:49:47 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 28 00:49:57.535: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1778 /api/v1/namespaces/watch-1778/configmaps/e2e-watch-test-configmap-b a0f92de4-0e86-447b-a64b-dde71f63f92e 4784341 0 2020-01-28 00:49:47 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 28 00:49:57.535: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1778 /api/v1/namespaces/watch-1778/configmaps/e2e-watch-test-configmap-b a0f92de4-0e86-447b-a64b-dde71f63f92e 4784341 0 2020-01-28 00:49:47 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:50:07.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1778" for this suite.

• [SLOW TEST:60.245 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":280,"completed":163,"skipped":2693,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:50:07.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:50:16.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-968" for this suite.

• [SLOW TEST:9.199 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":280,"completed":164,"skipped":2712,"failed":0}
SSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:50:16.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:50:24.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3352" for this suite.

• [SLOW TEST:8.139 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":280,"completed":165,"skipped":2716,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:50:24.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name s-test-opt-del-1152fc96-c81e-49b7-a13a-841cf40d7018
STEP: Creating secret with name s-test-opt-upd-07443238-b3de-4b23-a0ec-c0929172ff68
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-1152fc96-c81e-49b7-a13a-841cf40d7018
STEP: Updating secret s-test-opt-upd-07443238-b3de-4b23-a0ec-c0929172ff68
STEP: Creating secret with name s-test-opt-create-c2fc5c75-45eb-4af0-88fe-bc6794dff154
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:50:37.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4103" for this suite.

• [SLOW TEST:12.433 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":166,"skipped":2743,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:50:37.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:51:37.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1451" for this suite.

• [SLOW TEST:60.156 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":280,"completed":167,"skipped":2792,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:51:37.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 28 00:51:37.623: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cba9669c-7957-48db-a173-47964cca3b3a" in namespace "projected-4184" to be "success or failure"
Jan 28 00:51:37.626: INFO: Pod "downwardapi-volume-cba9669c-7957-48db-a173-47964cca3b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.808973ms
Jan 28 00:51:39.633: INFO: Pod "downwardapi-volume-cba9669c-7957-48db-a173-47964cca3b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009553869s
Jan 28 00:51:41.642: INFO: Pod "downwardapi-volume-cba9669c-7957-48db-a173-47964cca3b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018516293s
Jan 28 00:51:43.647: INFO: Pod "downwardapi-volume-cba9669c-7957-48db-a173-47964cca3b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023839119s
Jan 28 00:51:45.654: INFO: Pod "downwardapi-volume-cba9669c-7957-48db-a173-47964cca3b3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.031251857s
STEP: Saw pod success
Jan 28 00:51:45.654: INFO: Pod "downwardapi-volume-cba9669c-7957-48db-a173-47964cca3b3a" satisfied condition "success or failure"
Jan 28 00:51:45.662: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-cba9669c-7957-48db-a173-47964cca3b3a container client-container: 
STEP: delete the pod
Jan 28 00:51:46.434: INFO: Waiting for pod downwardapi-volume-cba9669c-7957-48db-a173-47964cca3b3a to disappear
Jan 28 00:51:46.492: INFO: Pod downwardapi-volume-cba9669c-7957-48db-a173-47964cca3b3a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:51:46.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4184" for this suite.

• [SLOW TEST:9.010 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":168,"skipped":2799,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:51:46.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 00:51:47.714: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 00:51:49.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769507, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769507, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769507, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769507, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:51:51.774: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769507, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769507, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769507, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769507, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:51:53.737: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769507, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769507, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769507, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769507, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 00:51:56.797: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:51:56.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5172-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:51:57.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3256" for this suite.
STEP: Destroying namespace "webhook-3256-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:11.676 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":280,"completed":169,"skipped":2808,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:51:58.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-6ce5d503-0f12-4f1e-8b12-4e1b005fd22d
STEP: Creating a pod to test consume secrets
Jan 28 00:51:58.296: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-57a1bf3e-3aa1-4cdc-a2a5-a717ec2d224b" in namespace "projected-9785" to be "success or failure"
Jan 28 00:51:58.315: INFO: Pod "pod-projected-secrets-57a1bf3e-3aa1-4cdc-a2a5-a717ec2d224b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.804041ms
Jan 28 00:52:00.323: INFO: Pod "pod-projected-secrets-57a1bf3e-3aa1-4cdc-a2a5-a717ec2d224b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026787952s
Jan 28 00:52:02.329: INFO: Pod "pod-projected-secrets-57a1bf3e-3aa1-4cdc-a2a5-a717ec2d224b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033091507s
Jan 28 00:52:04.335: INFO: Pod "pod-projected-secrets-57a1bf3e-3aa1-4cdc-a2a5-a717ec2d224b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039306814s
Jan 28 00:52:06.344: INFO: Pod "pod-projected-secrets-57a1bf3e-3aa1-4cdc-a2a5-a717ec2d224b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048201882s
Jan 28 00:52:08.356: INFO: Pod "pod-projected-secrets-57a1bf3e-3aa1-4cdc-a2a5-a717ec2d224b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059548096s
STEP: Saw pod success
Jan 28 00:52:08.356: INFO: Pod "pod-projected-secrets-57a1bf3e-3aa1-4cdc-a2a5-a717ec2d224b" satisfied condition "success or failure"
Jan 28 00:52:08.363: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-57a1bf3e-3aa1-4cdc-a2a5-a717ec2d224b container projected-secret-volume-test: 
STEP: delete the pod
Jan 28 00:52:08.410: INFO: Waiting for pod pod-projected-secrets-57a1bf3e-3aa1-4cdc-a2a5-a717ec2d224b to disappear
Jan 28 00:52:08.421: INFO: Pod pod-projected-secrets-57a1bf3e-3aa1-4cdc-a2a5-a717ec2d224b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:52:08.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9785" for this suite.

• [SLOW TEST:10.235 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":170,"skipped":2836,"failed":0}
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:52:08.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-7937248f-4a33-4c83-8e3f-013c181fc428 in namespace container-probe-3862
Jan 28 00:52:14.554: INFO: Started pod busybox-7937248f-4a33-4c83-8e3f-013c181fc428 in namespace container-probe-3862
STEP: checking the pod's current state and verifying that restartCount is present
Jan 28 00:52:14.557: INFO: Initial restart count of pod busybox-7937248f-4a33-4c83-8e3f-013c181fc428 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:56:16.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3862" for this suite.

• [SLOW TEST:247.652 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":171,"skipped":2836,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:56:16.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:56:16.181: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:56:17.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7618" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":280,"completed":172,"skipped":2843,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:56:17.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:56:17.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9080" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":280,"completed":173,"skipped":2863,"failed":0}
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:56:17.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 28 00:56:26.657: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:56:26.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3948" for this suite.

• [SLOW TEST:9.437 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":174,"skipped":2867,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:56:26.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 28 00:56:26.938: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ffd1bd7f-8846-4e57-81cf-b6652ed696f4" in namespace "projected-3981" to be "success or failure"
Jan 28 00:56:26.943: INFO: Pod "downwardapi-volume-ffd1bd7f-8846-4e57-81cf-b6652ed696f4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.712163ms
Jan 28 00:56:28.949: INFO: Pod "downwardapi-volume-ffd1bd7f-8846-4e57-81cf-b6652ed696f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011644175s
Jan 28 00:56:30.956: INFO: Pod "downwardapi-volume-ffd1bd7f-8846-4e57-81cf-b6652ed696f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01822764s
Jan 28 00:56:32.967: INFO: Pod "downwardapi-volume-ffd1bd7f-8846-4e57-81cf-b6652ed696f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029174025s
Jan 28 00:56:34.976: INFO: Pod "downwardapi-volume-ffd1bd7f-8846-4e57-81cf-b6652ed696f4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.037914846s
Jan 28 00:56:36.998: INFO: Pod "downwardapi-volume-ffd1bd7f-8846-4e57-81cf-b6652ed696f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.0605592s
STEP: Saw pod success
Jan 28 00:56:36.998: INFO: Pod "downwardapi-volume-ffd1bd7f-8846-4e57-81cf-b6652ed696f4" satisfied condition "success or failure"
Jan 28 00:56:37.005: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ffd1bd7f-8846-4e57-81cf-b6652ed696f4 container client-container: 
STEP: delete the pod
Jan 28 00:56:37.063: INFO: Waiting for pod downwardapi-volume-ffd1bd7f-8846-4e57-81cf-b6652ed696f4 to disappear
Jan 28 00:56:37.106: INFO: Pod downwardapi-volume-ffd1bd7f-8846-4e57-81cf-b6652ed696f4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:56:37.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3981" for this suite.

• [SLOW TEST:10.321 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":175,"skipped":2882,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:56:37.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0128 00:56:40.071053       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 00:56:40.071: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:56:40.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1668" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":280,"completed":176,"skipped":2887,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:56:40.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 28 00:56:40.295: INFO: Waiting up to 5m0s for pod "pod-76c76f23-c74e-4e5a-92d2-44045e2e3add" in namespace "emptydir-1602" to be "success or failure"
Jan 28 00:56:40.333: INFO: Pod "pod-76c76f23-c74e-4e5a-92d2-44045e2e3add": Phase="Pending", Reason="", readiness=false. Elapsed: 38.078172ms
Jan 28 00:56:42.967: INFO: Pod "pod-76c76f23-c74e-4e5a-92d2-44045e2e3add": Phase="Pending", Reason="", readiness=false. Elapsed: 2.672840101s
Jan 28 00:56:45.133: INFO: Pod "pod-76c76f23-c74e-4e5a-92d2-44045e2e3add": Phase="Pending", Reason="", readiness=false. Elapsed: 4.838602062s
Jan 28 00:56:47.648: INFO: Pod "pod-76c76f23-c74e-4e5a-92d2-44045e2e3add": Phase="Pending", Reason="", readiness=false. Elapsed: 7.353065276s
Jan 28 00:56:49.654: INFO: Pod "pod-76c76f23-c74e-4e5a-92d2-44045e2e3add": Phase="Pending", Reason="", readiness=false. Elapsed: 9.359505646s
Jan 28 00:56:51.661: INFO: Pod "pod-76c76f23-c74e-4e5a-92d2-44045e2e3add": Phase="Pending", Reason="", readiness=false. Elapsed: 11.366439836s
Jan 28 00:56:53.668: INFO: Pod "pod-76c76f23-c74e-4e5a-92d2-44045e2e3add": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.373058068s
STEP: Saw pod success
Jan 28 00:56:53.668: INFO: Pod "pod-76c76f23-c74e-4e5a-92d2-44045e2e3add" satisfied condition "success or failure"
Jan 28 00:56:53.673: INFO: Trying to get logs from node jerma-node pod pod-76c76f23-c74e-4e5a-92d2-44045e2e3add container test-container: 
STEP: delete the pod
Jan 28 00:56:53.758: INFO: Waiting for pod pod-76c76f23-c74e-4e5a-92d2-44045e2e3add to disappear
Jan 28 00:56:53.766: INFO: Pod pod-76c76f23-c74e-4e5a-92d2-44045e2e3add no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:56:53.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1602" for this suite.

• [SLOW TEST:13.707 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":177,"skipped":2896,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:56:53.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:56:53.942: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:56:55.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8686" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":280,"completed":178,"skipped":2899,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:56:55.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-38aad2e3-9ba7-4ac2-a6e1-32f75a6375be
STEP: Creating a pod to test consume configMaps
Jan 28 00:56:55.488: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ef78bd6b-f280-4b4f-8d03-8bc9033e7662" in namespace "projected-1174" to be "success or failure"
Jan 28 00:56:55.497: INFO: Pod "pod-projected-configmaps-ef78bd6b-f280-4b4f-8d03-8bc9033e7662": Phase="Pending", Reason="", readiness=false. Elapsed: 8.642665ms
Jan 28 00:56:57.503: INFO: Pod "pod-projected-configmaps-ef78bd6b-f280-4b4f-8d03-8bc9033e7662": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014092875s
Jan 28 00:56:59.580: INFO: Pod "pod-projected-configmaps-ef78bd6b-f280-4b4f-8d03-8bc9033e7662": Phase="Pending", Reason="", readiness=false. Elapsed: 4.091922738s
Jan 28 00:57:01.597: INFO: Pod "pod-projected-configmaps-ef78bd6b-f280-4b4f-8d03-8bc9033e7662": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108445005s
Jan 28 00:57:03.607: INFO: Pod "pod-projected-configmaps-ef78bd6b-f280-4b4f-8d03-8bc9033e7662": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.11855625s
STEP: Saw pod success
Jan 28 00:57:03.607: INFO: Pod "pod-projected-configmaps-ef78bd6b-f280-4b4f-8d03-8bc9033e7662" satisfied condition "success or failure"
Jan 28 00:57:03.613: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-ef78bd6b-f280-4b4f-8d03-8bc9033e7662 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 28 00:57:03.751: INFO: Waiting for pod pod-projected-configmaps-ef78bd6b-f280-4b4f-8d03-8bc9033e7662 to disappear
Jan 28 00:57:03.769: INFO: Pod pod-projected-configmaps-ef78bd6b-f280-4b4f-8d03-8bc9033e7662 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:57:03.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1174" for this suite.

• [SLOW TEST:8.420 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":179,"skipped":2918,"failed":0}
SSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:57:03.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:57:09.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3294" for this suite.

• [SLOW TEST:5.350 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":280,"completed":180,"skipped":2921,"failed":0}
SSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:57:09.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:57:09.284: INFO: Creating deployment "test-recreate-deployment"
Jan 28 00:57:09.300: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Jan 28 00:57:09.360: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 28 00:57:11.381: INFO: Waiting for deployment "test-recreate-deployment" to complete
Jan 28 00:57:11.384: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769829, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769829, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769829, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769829, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:57:13.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769829, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769829, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769829, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769829, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:57:15.401: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769829, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769829, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769829, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769829, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:57:17.391: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 28 00:57:17.403: INFO: Updating deployment test-recreate-deployment
Jan 28 00:57:17.403: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jan 28 00:57:17.665: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-3220 /apis/apps/v1/namespaces/deployment-3220/deployments/test-recreate-deployment d2b4fad7-a3d4-4d14-bc7c-128c953add02 4785959 2 2020-01-28 00:57:09 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0028a4068  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-28 00:57:17 +0000 UTC,LastTransitionTime:2020-01-28 00:57:17 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-01-28 00:57:17 +0000 UTC,LastTransitionTime:2020-01-28 00:57:09 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Jan 28 00:57:17.672: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-3220 /apis/apps/v1/namespaces/deployment-3220/replicasets/test-recreate-deployment-5f94c574ff 975a9a1e-f945-481a-ac00-98c53d2223db 4785958 1 2020-01-28 00:57:17 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment d2b4fad7-a3d4-4d14-bc7c-128c953add02 0xc003b1fcc7 0xc003b1fcc8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b1fd38  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 28 00:57:17.672: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 28 00:57:17.673: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-3220 /apis/apps/v1/namespaces/deployment-3220/replicasets/test-recreate-deployment-799c574856 9368f5fe-58da-4c0a-b0d4-21d62b168a97 4785950 2 2020-01-28 00:57:09 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment d2b4fad7-a3d4-4d14-bc7c-128c953add02 0xc003b1ff17 0xc003b1ff18}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b1ff88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 28 00:57:17.707: INFO: Pod "test-recreate-deployment-5f94c574ff-wc5rg" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-wc5rg test-recreate-deployment-5f94c574ff- deployment-3220 /api/v1/namespaces/deployment-3220/pods/test-recreate-deployment-5f94c574ff-wc5rg a2a5cdd4-519c-4d68-8903-1a9e91326055 4785961 0 2020-01-28 00:57:17 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 975a9a1e-f945-481a-ac00-98c53d2223db 0xc0032145e7 0xc0032145e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xwgn5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xwgn5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xwgn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:57:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:57:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:57:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 00:57:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-28 00:57:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:57:17.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3220" for this suite.

• [SLOW TEST:8.579 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":181,"skipped":2926,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:57:17.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0128 00:57:20.619528       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 00:57:20.619: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:57:20.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4260" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":280,"completed":182,"skipped":2931,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:57:20.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 00:57:22.499: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 00:57:25.519: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:57:27.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:57:29.525: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:57:31.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:57:33.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769842, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 00:57:36.572: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 00:57:36.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:57:37.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-752" for this suite.
STEP: Destroying namespace "webhook-752-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:17.366 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":280,"completed":183,"skipped":2935,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:57:38.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:58:28.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9157" for this suite.

• [SLOW TEST:50.286 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":280,"completed":184,"skipped":2955,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:58:28.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 00:58:29.139: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 00:58:31.151: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769909, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769909, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769909, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769909, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:58:33.157: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769909, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769909, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769909, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769909, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:58:35.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769909, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769909, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769909, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769909, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:58:37.159: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769909, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769909, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769909, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769909, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 00:58:40.200: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:58:40.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7037" for this suite.
STEP: Destroying namespace "webhook-7037-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.297 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":280,"completed":185,"skipped":2964,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:58:40.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 00:58:41.771: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:58:43.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:58:45.777: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:58:47.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:58:49.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:58:51.778: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769921, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 00:58:54.830: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:58:55.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8677" for this suite.
STEP: Destroying namespace "webhook-8677-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:14.641 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":280,"completed":186,"skipped":2975,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:58:55.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 28 00:58:55.383: INFO: Waiting up to 5m0s for pod "downwardapi-volume-578c6102-f10c-40c2-89ed-7cb55333011e" in namespace "downward-api-5390" to be "success or failure"
Jan 28 00:58:55.390: INFO: Pod "downwardapi-volume-578c6102-f10c-40c2-89ed-7cb55333011e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.669774ms
Jan 28 00:58:57.401: INFO: Pod "downwardapi-volume-578c6102-f10c-40c2-89ed-7cb55333011e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017646147s
Jan 28 00:58:59.405: INFO: Pod "downwardapi-volume-578c6102-f10c-40c2-89ed-7cb55333011e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022355377s
Jan 28 00:59:01.415: INFO: Pod "downwardapi-volume-578c6102-f10c-40c2-89ed-7cb55333011e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031704633s
Jan 28 00:59:03.421: INFO: Pod "downwardapi-volume-578c6102-f10c-40c2-89ed-7cb55333011e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.038045933s
Jan 28 00:59:05.432: INFO: Pod "downwardapi-volume-578c6102-f10c-40c2-89ed-7cb55333011e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.048910655s
STEP: Saw pod success
Jan 28 00:59:05.432: INFO: Pod "downwardapi-volume-578c6102-f10c-40c2-89ed-7cb55333011e" satisfied condition "success or failure"
Jan 28 00:59:05.437: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-578c6102-f10c-40c2-89ed-7cb55333011e container client-container: 
STEP: delete the pod
Jan 28 00:59:05.501: INFO: Waiting for pod downwardapi-volume-578c6102-f10c-40c2-89ed-7cb55333011e to disappear
Jan 28 00:59:05.594: INFO: Pod downwardapi-volume-578c6102-f10c-40c2-89ed-7cb55333011e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:59:05.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5390" for this suite.

• [SLOW TEST:10.384 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":187,"skipped":3007,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:59:05.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 28 00:59:21.884: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 00:59:21.956: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 00:59:23.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 00:59:23.967: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 00:59:25.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 00:59:25.964: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 28 00:59:27.957: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 28 00:59:27.964: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:59:27.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7847" for this suite.

• [SLOW TEST:22.355 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":280,"completed":188,"skipped":3062,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:59:27.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 00:59:29.119: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 00:59:31.134: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769969, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769969, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769969, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769969, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:59:33.140: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769969, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769969, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769969, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769969, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:59:35.267: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769969, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769969, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769969, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769969, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 00:59:37.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769969, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769969, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769969, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715769969, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 00:59:40.174: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:59:50.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6313" for this suite.
STEP: Destroying namespace "webhook-6313-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:22.586 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":280,"completed":189,"skipped":3083,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:59:50.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-05b41efb-2f3f-4674-8bca-59498c29e038
STEP: Creating a pod to test consume configMaps
Jan 28 00:59:50.678: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c519fcf8-08b9-4ca7-89b8-281e994d15ca" in namespace "projected-8554" to be "success or failure"
Jan 28 00:59:50.716: INFO: Pod "pod-projected-configmaps-c519fcf8-08b9-4ca7-89b8-281e994d15ca": Phase="Pending", Reason="", readiness=false. Elapsed: 37.24034ms
Jan 28 00:59:52.724: INFO: Pod "pod-projected-configmaps-c519fcf8-08b9-4ca7-89b8-281e994d15ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045572461s
Jan 28 00:59:54.731: INFO: Pod "pod-projected-configmaps-c519fcf8-08b9-4ca7-89b8-281e994d15ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052692095s
Jan 28 00:59:56.738: INFO: Pod "pod-projected-configmaps-c519fcf8-08b9-4ca7-89b8-281e994d15ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060101304s
Jan 28 00:59:58.745: INFO: Pod "pod-projected-configmaps-c519fcf8-08b9-4ca7-89b8-281e994d15ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066265914s
STEP: Saw pod success
Jan 28 00:59:58.745: INFO: Pod "pod-projected-configmaps-c519fcf8-08b9-4ca7-89b8-281e994d15ca" satisfied condition "success or failure"
Jan 28 00:59:58.749: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-c519fcf8-08b9-4ca7-89b8-281e994d15ca container projected-configmap-volume-test: 
STEP: delete the pod
Jan 28 00:59:58.804: INFO: Waiting for pod pod-projected-configmaps-c519fcf8-08b9-4ca7-89b8-281e994d15ca to disappear
Jan 28 00:59:58.807: INFO: Pod pod-projected-configmaps-c519fcf8-08b9-4ca7-89b8-281e994d15ca no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 00:59:58.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8554" for this suite.

• [SLOW TEST:8.248 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":190,"skipped":3124,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 00:59:58.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-164da694-2f6c-4031-81ba-7b296e6fa13f in namespace container-probe-2482
Jan 28 01:00:07.092: INFO: Started pod liveness-164da694-2f6c-4031-81ba-7b296e6fa13f in namespace container-probe-2482
STEP: checking the pod's current state and verifying that restartCount is present
Jan 28 01:00:07.096: INFO: Initial restart count of pod liveness-164da694-2f6c-4031-81ba-7b296e6fa13f is 0
Jan 28 01:00:27.345: INFO: Restart count of pod container-probe-2482/liveness-164da694-2f6c-4031-81ba-7b296e6fa13f is now 1 (20.248599917s elapsed)
Jan 28 01:00:47.516: INFO: Restart count of pod container-probe-2482/liveness-164da694-2f6c-4031-81ba-7b296e6fa13f is now 2 (40.419581313s elapsed)
Jan 28 01:01:07.583: INFO: Restart count of pod container-probe-2482/liveness-164da694-2f6c-4031-81ba-7b296e6fa13f is now 3 (1m0.486779135s elapsed)
Jan 28 01:01:27.647: INFO: Restart count of pod container-probe-2482/liveness-164da694-2f6c-4031-81ba-7b296e6fa13f is now 4 (1m20.550329835s elapsed)
Jan 28 01:02:28.014: INFO: Restart count of pod container-probe-2482/liveness-164da694-2f6c-4031-81ba-7b296e6fa13f is now 5 (2m20.917442296s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:02:28.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2482" for this suite.

• [SLOW TEST:149.362 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":280,"completed":191,"skipped":3129,"failed":0}
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:02:28.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:02:40.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2470" for this suite.

• [SLOW TEST:12.260 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":280,"completed":192,"skipped":3131,"failed":0}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:02:40.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-c65816f2-54cd-4cd1-95d9-b36eb39705ed
STEP: Creating a pod to test consume configMaps
Jan 28 01:02:40.576: INFO: Waiting up to 5m0s for pod "pod-configmaps-81a03eae-8adc-4e34-b602-66243880d1a0" in namespace "configmap-6751" to be "success or failure"
Jan 28 01:02:40.608: INFO: Pod "pod-configmaps-81a03eae-8adc-4e34-b602-66243880d1a0": Phase="Pending", Reason="", readiness=false. Elapsed: 31.854188ms
Jan 28 01:02:42.616: INFO: Pod "pod-configmaps-81a03eae-8adc-4e34-b602-66243880d1a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040399389s
Jan 28 01:02:44.622: INFO: Pod "pod-configmaps-81a03eae-8adc-4e34-b602-66243880d1a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046175524s
Jan 28 01:02:46.628: INFO: Pod "pod-configmaps-81a03eae-8adc-4e34-b602-66243880d1a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052584667s
Jan 28 01:02:48.666: INFO: Pod "pod-configmaps-81a03eae-8adc-4e34-b602-66243880d1a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.09052739s
STEP: Saw pod success
Jan 28 01:02:48.666: INFO: Pod "pod-configmaps-81a03eae-8adc-4e34-b602-66243880d1a0" satisfied condition "success or failure"
Jan 28 01:02:48.680: INFO: Trying to get logs from node jerma-node pod pod-configmaps-81a03eae-8adc-4e34-b602-66243880d1a0 container configmap-volume-test: 
STEP: delete the pod
Jan 28 01:02:48.784: INFO: Waiting for pod pod-configmaps-81a03eae-8adc-4e34-b602-66243880d1a0 to disappear
Jan 28 01:02:48.806: INFO: Pod pod-configmaps-81a03eae-8adc-4e34-b602-66243880d1a0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:02:48.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6751" for this suite.

• [SLOW TEST:8.375 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":193,"skipped":3136,"failed":0}
SS
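
The "mappings as non-root" variant mounts a ConfigMap key under a different file name while the pod runs with a non-root UID. A minimal sketch, assuming a ConfigMap named my-config with a key original-key (all names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  securityContext:
    runAsUser: 1000                 # non-root
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/config/renamed-key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/config
  volumes:
  - name: cfg
    configMap:
      name: my-config               # assumed to exist
      items:
      - key: original-key           # the ConfigMap key...
        path: renamed-key           # ...mapped to a different file name in the volume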
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:02:48.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 01:02:48.877: INFO: (0) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 6.583715ms)
Jan 28 01:02:48.882: INFO: (1) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.427744ms)
Jan 28 01:02:48.910: INFO: (2) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 27.738117ms)
Jan 28 01:02:48.916: INFO: (3) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.826508ms)
Jan 28 01:02:48.921: INFO: (4) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.825203ms)
Jan 28 01:02:48.926: INFO: (5) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.660081ms)
Jan 28 01:02:48.931: INFO: (6) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.876986ms)
Jan 28 01:02:48.936: INFO: (7) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.065173ms)
Jan 28 01:02:48.940: INFO: (8) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.598748ms)
Jan 28 01:02:48.943: INFO: (9) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.71561ms)
Jan 28 01:02:48.947: INFO: (10) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.810762ms)
Jan 28 01:02:48.953: INFO: (11) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 5.25255ms)
Jan 28 01:02:48.957: INFO: (12) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 4.710147ms)
Jan 28 01:02:48.961: INFO: (13) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.807441ms)
Jan 28 01:02:48.964: INFO: (14) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.214124ms)
Jan 28 01:02:48.968: INFO: (15) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.644439ms)
Jan 28 01:02:48.971: INFO: (16) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.275187ms)
Jan 28 01:02:48.974: INFO: (17) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 2.891709ms)
Jan 28 01:02:48.977: INFO: (18) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 2.802762ms)
Jan 28 01:02:48.980: INFO: (19) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log apt/ ... (200; 3.256572ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:02:48.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9941" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":280,"completed":194,"skipped":3138,"failed":0}
SSSSSSSS
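
The twenty numbered requests above all hit the node proxy subresource, which tunnels through the apiserver to the kubelet's read-only log listing; outside the suite the same directory listing is reachable with 'kubectl get --raw /api/v1/nodes/jerma-node:10250/proxy/logs/' (the explicit :10250 kubelet port is what the test name refers to).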
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:02:48.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1384
STEP: creating the pod
Jan 28 01:02:49.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1719'
Jan 28 01:02:51.325: INFO: stderr: ""
Jan 28 01:02:51.326: INFO: stdout: "pod/pause created\n"
Jan 28 01:02:51.326: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 28 01:02:51.326: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1719" to be "running and ready"
Jan 28 01:02:51.383: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 57.450993ms
Jan 28 01:02:53.388: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062464161s
Jan 28 01:02:55.395: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06959732s
Jan 28 01:02:57.403: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.077283381s
Jan 28 01:02:57.403: INFO: Pod "pause" satisfied condition "running and ready"
Jan 28 01:02:57.403: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 28 01:02:57.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1719'
Jan 28 01:02:57.597: INFO: stderr: ""
Jan 28 01:02:57.597: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 28 01:02:57.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1719'
Jan 28 01:02:57.781: INFO: stderr: ""
Jan 28 01:02:57.781: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          6s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 28 01:02:57.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1719'
Jan 28 01:02:57.901: INFO: stderr: ""
Jan 28 01:02:57.901: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 28 01:02:57.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1719'
Jan 28 01:02:58.023: INFO: stderr: ""
Jan 28 01:02:58.023: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          7s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391
STEP: using delete to clean up resources
Jan 28 01:02:58.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1719'
Jan 28 01:02:58.147: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 01:02:58.147: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 28 01:02:58.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1719'
Jan 28 01:02:58.253: INFO: stderr: "No resources found in kubectl-1719 namespace.\n"
Jan 28 01:02:58.253: INFO: stdout: ""
Jan 28 01:02:58.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1719 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 28 01:02:58.393: INFO: stderr: ""
Jan 28 01:02:58.393: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:02:58.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1719" for this suite.

• [SLOW TEST:9.420 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1381
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":280,"completed":195,"skipped":3146,"failed":0}
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:02:58.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 28 01:03:14.740: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 01:03:14.749: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 01:03:16.749: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 01:03:16.756: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 01:03:18.749: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 01:03:18.755: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 01:03:20.749: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 01:03:20.796: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 28 01:03:22.749: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 28 01:03:22.755: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:03:22.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1313" for this suite.

• [SLOW TEST:24.464 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":280,"completed":196,"skipped":3149,"failed":0}
SSSSSSSSSSSSSSSS
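
The repeated "still exists" polling above covers the termination grace period during which the kubelet runs the preStop handler before killing the container. A minimal sketch of a pod with such a hook (the hook command is illustrative; the suite's hook calls back to the HTTPGet handler pod it created earlier):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # runs inside the container after the delete request, before SIGTERM
          command: ["/bin/sh", "-c", "echo goodbye > /tmp/prestop; sleep 5"]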
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:03:22.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 01:03:23.549: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 01:03:25.569: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770203, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770203, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770203, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770203, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:03:27.578: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770203, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770203, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770203, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770203, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:03:29.676: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770203, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770203, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770203, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770203, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 01:03:32.598: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jan 28 01:03:40.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-8229 to-be-attached-pod -i -c=container1'
Jan 28 01:03:40.943: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:03:40.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8229" for this suite.
STEP: Destroying namespace "webhook-8229-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:18.332 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":280,"completed":197,"skipped":3165,"failed":0}
SSSSSSSSSSSSS
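
Denying 'kubectl attach' (the "rc: 1" above) works because attach is a CONNECT operation on the pods/attach subresource, which a validating webhook can intercept. A hedged sketch of the registration object, assuming the webhook service deployed earlier in this test (handler path and caBundle are placeholders):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attaching-pod            # illustrative
webhooks:
- name: deny-attaching-pod.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]           # attach/exec arrive as CONNECT
    resources: ["pods/attach"]
  clientConfig:
    service:
      namespace: webhook-8229         # the namespace from this run
      name: e2e-test-webhook
      path: /pods/attach              # assumed handler path
    caBundle: "<base64-encoded CA>"   # placeholder
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail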
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:03:41.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test substitution in container's command
Jan 28 01:03:41.320: INFO: Waiting up to 5m0s for pod "var-expansion-e7425a50-f989-4970-b41f-7a84dc5fea67" in namespace "var-expansion-9943" to be "success or failure"
Jan 28 01:03:41.333: INFO: Pod "var-expansion-e7425a50-f989-4970-b41f-7a84dc5fea67": Phase="Pending", Reason="", readiness=false. Elapsed: 12.43937ms
Jan 28 01:03:43.352: INFO: Pod "var-expansion-e7425a50-f989-4970-b41f-7a84dc5fea67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031807813s
Jan 28 01:03:45.362: INFO: Pod "var-expansion-e7425a50-f989-4970-b41f-7a84dc5fea67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041933248s
Jan 28 01:03:47.369: INFO: Pod "var-expansion-e7425a50-f989-4970-b41f-7a84dc5fea67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048655041s
Jan 28 01:03:49.375: INFO: Pod "var-expansion-e7425a50-f989-4970-b41f-7a84dc5fea67": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055165482s
Jan 28 01:03:51.384: INFO: Pod "var-expansion-e7425a50-f989-4970-b41f-7a84dc5fea67": Phase="Pending", Reason="", readiness=false. Elapsed: 10.064145402s
Jan 28 01:03:53.401: INFO: Pod "var-expansion-e7425a50-f989-4970-b41f-7a84dc5fea67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.080864318s
STEP: Saw pod success
Jan 28 01:03:53.401: INFO: Pod "var-expansion-e7425a50-f989-4970-b41f-7a84dc5fea67" satisfied condition "success or failure"
Jan 28 01:03:53.406: INFO: Trying to get logs from node jerma-node pod var-expansion-e7425a50-f989-4970-b41f-7a84dc5fea67 container dapi-container: 
STEP: delete the pod
Jan 28 01:03:53.458: INFO: Waiting for pod var-expansion-e7425a50-f989-4970-b41f-7a84dc5fea67 to disappear
Jan 28 01:03:53.467: INFO: Pod var-expansion-e7425a50-f989-4970-b41f-7a84dc5fea67 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:03:53.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9943" for this suite.

• [SLOW TEST:12.341 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":280,"completed":198,"skipped":3178,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
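
Substitution in a container's command uses the $(VAR) form, expanded by the kubelet from the container's own env before the command runs (no shell involved). A minimal sketch (illustrative names):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "echo test-value is $(TEST_VAR)"]   # $(TEST_VAR) expanded by the kubelet
    env:
    - name: TEST_VAR
      value: "test-value"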
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:03:53.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jan 28 01:03:53.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jan 28 01:04:06.257: INFO: >>> kubeConfig: /root/.kube/config
Jan 28 01:04:09.895: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:04:22.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5027" for this suite.

• [SLOW TEST:28.932 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":280,"completed":199,"skipped":3217,"failed":0}
SSSSSSSSSSSS
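
The "same group but different versions" cases boil down to CRDs like the sketch below, where every served version carries its own schema and shows up in the aggregated OpenAPI document. A hedged example against apiextensions.k8s.io/v1 (group and kind are illustrative, not the ones this test generates):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true                    # exactly one version may be the storage version
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true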
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:04:22.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 01:04:30.724: INFO: Waiting up to 5m0s for pod "client-envvars-a26232c2-5c1e-46df-afe4-5e14e82292ab" in namespace "pods-9325" to be "success or failure"
Jan 28 01:04:30.740: INFO: Pod "client-envvars-a26232c2-5c1e-46df-afe4-5e14e82292ab": Phase="Pending", Reason="", readiness=false. Elapsed: 15.906198ms
Jan 28 01:04:32.747: INFO: Pod "client-envvars-a26232c2-5c1e-46df-afe4-5e14e82292ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023195823s
Jan 28 01:04:34.752: INFO: Pod "client-envvars-a26232c2-5c1e-46df-afe4-5e14e82292ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028390312s
Jan 28 01:04:36.760: INFO: Pod "client-envvars-a26232c2-5c1e-46df-afe4-5e14e82292ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036196596s
Jan 28 01:04:38.772: INFO: Pod "client-envvars-a26232c2-5c1e-46df-afe4-5e14e82292ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048828604s
STEP: Saw pod success
Jan 28 01:04:38.773: INFO: Pod "client-envvars-a26232c2-5c1e-46df-afe4-5e14e82292ab" satisfied condition "success or failure"
Jan 28 01:04:38.777: INFO: Trying to get logs from node jerma-node pod client-envvars-a26232c2-5c1e-46df-afe4-5e14e82292ab container env3cont: 
STEP: delete the pod
Jan 28 01:04:38.843: INFO: Waiting for pod client-envvars-a26232c2-5c1e-46df-afe4-5e14e82292ab to disappear
Jan 28 01:04:38.874: INFO: Pod client-envvars-a26232c2-5c1e-46df-afe4-5e14e82292ab no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:04:38.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9325" for this suite.

• [SLOW TEST:16.412 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":280,"completed":200,"skipped":3229,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
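
This spec creates its client pod only after a service already exists, because the kubelet injects service discovery variables at container start. For a service like the sketch below (illustrative name), containers started later in the same namespace see FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT (service name upper-cased, dashes turned into underscores):

apiVersion: v1
kind: Service
metadata:
  name: fooservice
spec:
  selector:
    app: fooservice
  ports:
  - port: 8765
    targetPort: 8080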
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:04:38.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 28 01:04:38.970: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c498027-88e3-4abb-bf85-4cc8f3ef36a9" in namespace "downward-api-518" to be "success or failure"
Jan 28 01:04:38.999: INFO: Pod "downwardapi-volume-7c498027-88e3-4abb-bf85-4cc8f3ef36a9": Phase="Pending", Reason="", readiness=false. Elapsed: 28.592966ms
Jan 28 01:04:41.004: INFO: Pod "downwardapi-volume-7c498027-88e3-4abb-bf85-4cc8f3ef36a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034294105s
Jan 28 01:04:43.010: INFO: Pod "downwardapi-volume-7c498027-88e3-4abb-bf85-4cc8f3ef36a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039617415s
Jan 28 01:04:45.129: INFO: Pod "downwardapi-volume-7c498027-88e3-4abb-bf85-4cc8f3ef36a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158701161s
Jan 28 01:04:47.135: INFO: Pod "downwardapi-volume-7c498027-88e3-4abb-bf85-4cc8f3ef36a9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.164799184s
Jan 28 01:04:49.141: INFO: Pod "downwardapi-volume-7c498027-88e3-4abb-bf85-4cc8f3ef36a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.171105317s
STEP: Saw pod success
Jan 28 01:04:49.141: INFO: Pod "downwardapi-volume-7c498027-88e3-4abb-bf85-4cc8f3ef36a9" satisfied condition "success or failure"
Jan 28 01:04:49.144: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7c498027-88e3-4abb-bf85-4cc8f3ef36a9 container client-container: 
STEP: delete the pod
Jan 28 01:04:49.406: INFO: Waiting for pod downwardapi-volume-7c498027-88e3-4abb-bf85-4cc8f3ef36a9 to disappear
Jan 28 01:04:49.415: INFO: Pod downwardapi-volume-7c498027-88e3-4abb-bf85-4cc8f3ef36a9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:04:49.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-518" for this suite.

• [SLOW TEST:10.536 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":280,"completed":201,"skipped":3253,"failed":0}
SSSSSSSSSSSSSS
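
The "podname only" case projects a single downward API field into a file. A minimal sketch (illustrative names):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the file contains the pod's own name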
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:04:49.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating all guestbook components
Jan 28 01:04:49.559: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jan 28 01:04:49.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2915'
Jan 28 01:04:50.075: INFO: stderr: ""
Jan 28 01:04:50.075: INFO: stdout: "service/agnhost-slave created\n"
Jan 28 01:04:50.076: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jan 28 01:04:50.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2915'
Jan 28 01:04:50.560: INFO: stderr: ""
Jan 28 01:04:50.560: INFO: stdout: "service/agnhost-master created\n"
Jan 28 01:04:50.561: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 28 01:04:50.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2915'
Jan 28 01:04:51.043: INFO: stderr: ""
Jan 28 01:04:51.043: INFO: stdout: "service/frontend created\n"
Jan 28 01:04:51.044: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jan 28 01:04:51.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2915'
Jan 28 01:04:51.449: INFO: stderr: ""
Jan 28 01:04:51.449: INFO: stdout: "deployment.apps/frontend created\n"
Jan 28 01:04:51.449: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 28 01:04:51.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2915'
Jan 28 01:04:51.980: INFO: stderr: ""
Jan 28 01:04:51.980: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jan 28 01:04:51.980: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 28 01:04:51.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2915'
Jan 28 01:04:53.096: INFO: stderr: ""
Jan 28 01:04:53.096: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jan 28 01:04:53.096: INFO: Waiting for all frontend pods to be Running.
Jan 28 01:05:13.147: INFO: Waiting for frontend to serve content.
Jan 28 01:05:13.170: INFO: Trying to add a new entry to the guestbook.
Jan 28 01:05:13.183: INFO: Verifying that added entry can be retrieved.
Jan 28 01:05:13.199: INFO: Failed to get response from guestbook. err: , response: {"data":""}
STEP: using delete to clean up resources
Jan 28 01:05:18.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2915'
Jan 28 01:05:18.497: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 01:05:18.497: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 28 01:05:18.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2915'
Jan 28 01:05:18.675: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 01:05:18.675: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 28 01:05:18.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2915'
Jan 28 01:05:18.918: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 01:05:18.918: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 28 01:05:18.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2915'
Jan 28 01:05:19.083: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 01:05:19.083: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 28 01:05:19.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2915'
Jan 28 01:05:19.215: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 01:05:19.215: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 28 01:05:19.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2915'
Jan 28 01:05:19.331: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 01:05:19.331: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:05:19.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2915" for this suite.

• [SLOW TEST:29.936 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":280,"completed":202,"skipped":3267,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:05:19.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 28 01:05:19.627: INFO: Waiting up to 5m0s for pod "downward-api-adc0acf6-4ba9-4cae-bdfc-1350233d75a6" in namespace "downward-api-3117" to be "success or failure"
Jan 28 01:05:19.645: INFO: Pod "downward-api-adc0acf6-4ba9-4cae-bdfc-1350233d75a6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.887129ms
Jan 28 01:05:21.702: INFO: Pod "downward-api-adc0acf6-4ba9-4cae-bdfc-1350233d75a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073737494s
Jan 28 01:05:24.040: INFO: Pod "downward-api-adc0acf6-4ba9-4cae-bdfc-1350233d75a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.412273142s
Jan 28 01:05:26.092: INFO: Pod "downward-api-adc0acf6-4ba9-4cae-bdfc-1350233d75a6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.463705539s
Jan 28 01:05:28.099: INFO: Pod "downward-api-adc0acf6-4ba9-4cae-bdfc-1350233d75a6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.470750555s
Jan 28 01:05:30.106: INFO: Pod "downward-api-adc0acf6-4ba9-4cae-bdfc-1350233d75a6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.478090881s
Jan 28 01:05:32.121: INFO: Pod "downward-api-adc0acf6-4ba9-4cae-bdfc-1350233d75a6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.493579234s
Jan 28 01:05:34.128: INFO: Pod "downward-api-adc0acf6-4ba9-4cae-bdfc-1350233d75a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.500038572s
STEP: Saw pod success
Jan 28 01:05:34.128: INFO: Pod "downward-api-adc0acf6-4ba9-4cae-bdfc-1350233d75a6" satisfied condition "success or failure"
Jan 28 01:05:34.131: INFO: Trying to get logs from node jerma-node pod downward-api-adc0acf6-4ba9-4cae-bdfc-1350233d75a6 container dapi-container: 
STEP: delete the pod
Jan 28 01:05:34.182: INFO: Waiting for pod downward-api-adc0acf6-4ba9-4cae-bdfc-1350233d75a6 to disappear
Jan 28 01:05:34.215: INFO: Pod downward-api-adc0acf6-4ba9-4cae-bdfc-1350233d75a6 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:05:34.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3117" for this suite.

• [SLOW TEST:14.856 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":280,"completed":203,"skipped":3298,"failed":0}
SSSSSSSSSS
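
The host IP reaches the container through the downward API as an env var sourced from status.hostIP. A minimal sketch (illustrative names):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["/bin/sh", "-c", "echo HOST_IP=$(HOST_IP)"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # the IP of the node the pod landed on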
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:05:34.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 28 01:05:34.311: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:05:52.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2066" for this suite.

• [SLOW TEST:18.145 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":280,"completed":204,"skipped":3308,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
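
The "setting up watch" step is what makes the later "creation was observed" and "deletion was observed" assertions possible: the test opens a watch on the pod list and checks that the ADDED, MODIFIED and DELETED events arrive in order. The same event stream can be eyeballed by running 'kubectl get pods --watch' in the namespace while a pod is created and deleted.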
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:05:52.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-projected-all-test-volume-4a645dd7-6381-4003-a8be-17f9fa2d6bd0
STEP: Creating secret with name secret-projected-all-test-volume-2912d645-0632-45ea-abc6-53872a32db76
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 28 01:05:52.542: INFO: Waiting up to 5m0s for pod "projected-volume-e41331a0-bd3a-4ed8-a6f7-ad04d6a9e20e" in namespace "projected-5158" to be "success or failure"
Jan 28 01:05:52.550: INFO: Pod "projected-volume-e41331a0-bd3a-4ed8-a6f7-ad04d6a9e20e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.614126ms
Jan 28 01:05:54.566: INFO: Pod "projected-volume-e41331a0-bd3a-4ed8-a6f7-ad04d6a9e20e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02348348s
Jan 28 01:05:56.573: INFO: Pod "projected-volume-e41331a0-bd3a-4ed8-a6f7-ad04d6a9e20e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030911052s
Jan 28 01:05:58.583: INFO: Pod "projected-volume-e41331a0-bd3a-4ed8-a6f7-ad04d6a9e20e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040441736s
Jan 28 01:06:00.592: INFO: Pod "projected-volume-e41331a0-bd3a-4ed8-a6f7-ad04d6a9e20e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.049652486s
STEP: Saw pod success
Jan 28 01:06:00.592: INFO: Pod "projected-volume-e41331a0-bd3a-4ed8-a6f7-ad04d6a9e20e" satisfied condition "success or failure"
Jan 28 01:06:00.604: INFO: Trying to get logs from node jerma-node pod projected-volume-e41331a0-bd3a-4ed8-a6f7-ad04d6a9e20e container projected-all-volume-test: 
STEP: delete the pod
Jan 28 01:06:00.797: INFO: Waiting for pod projected-volume-e41331a0-bd3a-4ed8-a6f7-ad04d6a9e20e to disappear
Jan 28 01:06:00.815: INFO: Pod projected-volume-e41331a0-bd3a-4ed8-a6f7-ad04d6a9e20e no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:06:00.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5158" for this suite.

• [SLOW TEST:8.454 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":280,"completed":205,"skipped":3329,"failed":0}
SS
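
A projected volume merges several sources under one mount point, which is exactly what "all components that make up the projection API" exercises. A hedged sketch, assuming a ConfigMap my-config and a Secret my-secret exist (all names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo
spec:
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["ls", "/all-projections"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all-projections
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: my-config            # assumed to exist
      - secret:
          name: my-secret            # assumed to exist
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name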
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:06:00.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 28 01:06:09.223: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:06:09.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6480" for this suite.

• [SLOW TEST:8.492 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":206,"skipped":3331,"failed":0}
SSSSSSSSSS
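
The "Expected: &{OK}" line above is the termination message read back from the container status. A minimal sketch of a pod that produces it (illustrative names; the suite builds its spec in Go):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log      # the default path
    terminationMessagePolicy: FallbackToLogsOnError   # use the log tail only if the file is empty and the container failed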
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:06:09.319: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 01:06:09.526: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-732188ef-e2b0-4210-8080-30ab9392068e" in namespace "security-context-test-6699" to be "success or failure"
Jan 28 01:06:09.533: INFO: Pod "busybox-readonly-false-732188ef-e2b0-4210-8080-30ab9392068e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.992347ms
Jan 28 01:06:11.539: INFO: Pod "busybox-readonly-false-732188ef-e2b0-4210-8080-30ab9392068e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013160984s
Jan 28 01:06:13.545: INFO: Pod "busybox-readonly-false-732188ef-e2b0-4210-8080-30ab9392068e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019872684s
Jan 28 01:06:15.551: INFO: Pod "busybox-readonly-false-732188ef-e2b0-4210-8080-30ab9392068e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025659612s
Jan 28 01:06:17.559: INFO: Pod "busybox-readonly-false-732188ef-e2b0-4210-8080-30ab9392068e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.033637129s
Jan 28 01:06:17.559: INFO: Pod "busybox-readonly-false-732188ef-e2b0-4210-8080-30ab9392068e" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:06:17.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6699" for this suite.

• [SLOW TEST:8.257 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":280,"completed":207,"skipped":3341,"failed":0}
SSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:06:17.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jan 28 01:06:17.917: INFO: Created pod &Pod{ObjectMeta:{dns-6048  dns-6048 /api/v1/namespaces/dns-6048/pods/dns-6048 f31ae921-b65a-4042-b337-ed257797ccf5 4788378 0 2020-01-28 01:06:17 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s4brx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s4brx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s4brx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 28 01:06:17.927: INFO: The status of Pod dns-6048 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:06:19.934: INFO: The status of Pod dns-6048 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:06:21.933: INFO: The status of Pod dns-6048 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:06:23.935: INFO: The status of Pod dns-6048 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:06:25.935: INFO: The status of Pod dns-6048 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Jan 28 01:06:25.935: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6048 PodName:dns-6048 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 01:06:25.935: INFO: >>> kubeConfig: /root/.kube/config
I0128 01:06:25.996644       9 log.go:172] (0xc0026da000) (0xc001bfe000) Create stream
I0128 01:06:25.996726       9 log.go:172] (0xc0026da000) (0xc001bfe000) Stream added, broadcasting: 1
I0128 01:06:26.000633       9 log.go:172] (0xc0026da000) Reply frame received for 1
I0128 01:06:26.000687       9 log.go:172] (0xc0026da000) (0xc0021e8140) Create stream
I0128 01:06:26.000712       9 log.go:172] (0xc0026da000) (0xc0021e8140) Stream added, broadcasting: 3
I0128 01:06:26.002380       9 log.go:172] (0xc0026da000) Reply frame received for 3
I0128 01:06:26.002455       9 log.go:172] (0xc0026da000) (0xc001bfe280) Create stream
I0128 01:06:26.002468       9 log.go:172] (0xc0026da000) (0xc001bfe280) Stream added, broadcasting: 5
I0128 01:06:26.004913       9 log.go:172] (0xc0026da000) Reply frame received for 5
I0128 01:06:26.114087       9 log.go:172] (0xc0026da000) Data frame received for 3
I0128 01:06:26.114150       9 log.go:172] (0xc0021e8140) (3) Data frame handling
I0128 01:06:26.114167       9 log.go:172] (0xc0021e8140) (3) Data frame sent
I0128 01:06:26.202686       9 log.go:172] (0xc0026da000) Data frame received for 1
I0128 01:06:26.202847       9 log.go:172] (0xc0026da000) (0xc001bfe280) Stream removed, broadcasting: 5
I0128 01:06:26.202884       9 log.go:172] (0xc001bfe000) (1) Data frame handling
I0128 01:06:26.202899       9 log.go:172] (0xc001bfe000) (1) Data frame sent
I0128 01:06:26.202927       9 log.go:172] (0xc0026da000) (0xc0021e8140) Stream removed, broadcasting: 3
I0128 01:06:26.202980       9 log.go:172] (0xc0026da000) (0xc001bfe000) Stream removed, broadcasting: 1
I0128 01:06:26.202993       9 log.go:172] (0xc0026da000) Go away received
I0128 01:06:26.203504       9 log.go:172] (0xc0026da000) (0xc001bfe000) Stream removed, broadcasting: 1
I0128 01:06:26.203515       9 log.go:172] (0xc0026da000) (0xc0021e8140) Stream removed, broadcasting: 3
I0128 01:06:26.203522       9 log.go:172] (0xc0026da000) (0xc001bfe280) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jan 28 01:06:26.203: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6048 PodName:dns-6048 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 01:06:26.203: INFO: >>> kubeConfig: /root/.kube/config
I0128 01:06:26.240523       9 log.go:172] (0xc0021c6000) (0xc001802280) Create stream
I0128 01:06:26.240596       9 log.go:172] (0xc0021c6000) (0xc001802280) Stream added, broadcasting: 1
I0128 01:06:26.244695       9 log.go:172] (0xc0021c6000) Reply frame received for 1
I0128 01:06:26.244734       9 log.go:172] (0xc0021c6000) (0xc0021e85a0) Create stream
I0128 01:06:26.244745       9 log.go:172] (0xc0021c6000) (0xc0021e85a0) Stream added, broadcasting: 3
I0128 01:06:26.245765       9 log.go:172] (0xc0021c6000) Reply frame received for 3
I0128 01:06:26.245808       9 log.go:172] (0xc0021c6000) (0xc0018023c0) Create stream
I0128 01:06:26.245828       9 log.go:172] (0xc0021c6000) (0xc0018023c0) Stream added, broadcasting: 5
I0128 01:06:26.246974       9 log.go:172] (0xc0021c6000) Reply frame received for 5
I0128 01:06:26.344759       9 log.go:172] (0xc0021c6000) Data frame received for 3
I0128 01:06:26.344895       9 log.go:172] (0xc0021e85a0) (3) Data frame handling
I0128 01:06:26.344941       9 log.go:172] (0xc0021e85a0) (3) Data frame sent
I0128 01:06:26.410020       9 log.go:172] (0xc0021c6000) Data frame received for 1
I0128 01:06:26.410098       9 log.go:172] (0xc001802280) (1) Data frame handling
I0128 01:06:26.410119       9 log.go:172] (0xc001802280) (1) Data frame sent
I0128 01:06:26.410135       9 log.go:172] (0xc0021c6000) (0xc001802280) Stream removed, broadcasting: 1
I0128 01:06:26.410574       9 log.go:172] (0xc0021c6000) (0xc0021e85a0) Stream removed, broadcasting: 3
I0128 01:06:26.410787       9 log.go:172] (0xc0021c6000) (0xc0018023c0) Stream removed, broadcasting: 5
I0128 01:06:26.410817       9 log.go:172] (0xc0021c6000) (0xc001802280) Stream removed, broadcasting: 1
I0128 01:06:26.410829       9 log.go:172] (0xc0021c6000) (0xc0021e85a0) Stream removed, broadcasting: 3
I0128 01:06:26.410837       9 log.go:172] (0xc0021c6000) (0xc0018023c0) Stream removed, broadcasting: 5
Jan 28 01:06:26.411: INFO: Deleting pod dns-6048...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:06:26.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6048" for this suite.

• [SLOW TEST:8.949 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":280,"completed":208,"skipped":3345,"failed":0}
SSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:06:26.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 28 01:06:26.695: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 28 01:06:31.746: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:06:32.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8870" for this suite.

• [SLOW TEST:6.257 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":280,"completed":209,"skipped":3349,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:06:32.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-af89b89c-f611-4649-a65b-4e63f5988114
STEP: Creating a pod to test consume secrets
Jan 28 01:06:32.945: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c19336fd-74ce-41f7-ae93-c4a8b3e64140" in namespace "projected-5877" to be "success or failure"
Jan 28 01:06:32.960: INFO: Pod "pod-projected-secrets-c19336fd-74ce-41f7-ae93-c4a8b3e64140": Phase="Pending", Reason="", readiness=false. Elapsed: 14.542879ms
Jan 28 01:06:34.968: INFO: Pod "pod-projected-secrets-c19336fd-74ce-41f7-ae93-c4a8b3e64140": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02259684s
Jan 28 01:06:36.983: INFO: Pod "pod-projected-secrets-c19336fd-74ce-41f7-ae93-c4a8b3e64140": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037770967s
Jan 28 01:06:38.990: INFO: Pod "pod-projected-secrets-c19336fd-74ce-41f7-ae93-c4a8b3e64140": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044857055s
Jan 28 01:06:40.996: INFO: Pod "pod-projected-secrets-c19336fd-74ce-41f7-ae93-c4a8b3e64140": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051209366s
Jan 28 01:06:43.003: INFO: Pod "pod-projected-secrets-c19336fd-74ce-41f7-ae93-c4a8b3e64140": Phase="Pending", Reason="", readiness=false. Elapsed: 10.057520271s
Jan 28 01:06:45.007: INFO: Pod "pod-projected-secrets-c19336fd-74ce-41f7-ae93-c4a8b3e64140": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.061877407s
STEP: Saw pod success
Jan 28 01:06:45.007: INFO: Pod "pod-projected-secrets-c19336fd-74ce-41f7-ae93-c4a8b3e64140" satisfied condition "success or failure"
Jan 28 01:06:45.010: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-c19336fd-74ce-41f7-ae93-c4a8b3e64140 container projected-secret-volume-test: 
STEP: delete the pod
Jan 28 01:06:45.043: INFO: Waiting for pod pod-projected-secrets-c19336fd-74ce-41f7-ae93-c4a8b3e64140 to disappear
Jan 28 01:06:45.127: INFO: Pod pod-projected-secrets-c19336fd-74ce-41f7-ae93-c4a8b3e64140 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:06:45.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5877" for this suite.

• [SLOW TEST:12.353 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":210,"skipped":3383,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:06:45.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 01:06:45.297: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 28 01:06:45.326: INFO: Number of nodes with available pods: 0
Jan 28 01:06:45.326: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:06:46.787: INFO: Number of nodes with available pods: 0
Jan 28 01:06:46.787: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:06:47.480: INFO: Number of nodes with available pods: 0
Jan 28 01:06:47.480: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:06:48.585: INFO: Number of nodes with available pods: 0
Jan 28 01:06:48.585: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:06:49.337: INFO: Number of nodes with available pods: 0
Jan 28 01:06:49.337: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:06:50.412: INFO: Number of nodes with available pods: 0
Jan 28 01:06:50.412: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:06:52.200: INFO: Number of nodes with available pods: 0
Jan 28 01:06:52.200: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:06:52.711: INFO: Number of nodes with available pods: 0
Jan 28 01:06:52.711: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:06:53.802: INFO: Number of nodes with available pods: 0
Jan 28 01:06:53.802: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:06:54.336: INFO: Number of nodes with available pods: 0
Jan 28 01:06:54.336: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:06:55.338: INFO: Number of nodes with available pods: 1
Jan 28 01:06:55.338: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 28 01:06:56.368: INFO: Number of nodes with available pods: 2
Jan 28 01:06:56.368: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 28 01:06:56.425: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:06:56.425: INFO: Wrong image for pod: daemon-set-v28k8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:06:57.456: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:06:57.456: INFO: Wrong image for pod: daemon-set-v28k8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:06:58.829: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:06:58.829: INFO: Wrong image for pod: daemon-set-v28k8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:06:59.455: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:06:59.455: INFO: Wrong image for pod: daemon-set-v28k8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:00.457: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:00.457: INFO: Wrong image for pod: daemon-set-v28k8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:01.456: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:01.456: INFO: Wrong image for pod: daemon-set-v28k8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:02.457: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:02.457: INFO: Wrong image for pod: daemon-set-v28k8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:02.457: INFO: Pod daemon-set-v28k8 is not available
Jan 28 01:07:03.456: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:03.456: INFO: Wrong image for pod: daemon-set-v28k8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:03.456: INFO: Pod daemon-set-v28k8 is not available
Jan 28 01:07:04.456: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:04.456: INFO: Wrong image for pod: daemon-set-v28k8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:04.456: INFO: Pod daemon-set-v28k8 is not available
Jan 28 01:07:05.455: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:05.455: INFO: Wrong image for pod: daemon-set-v28k8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:05.455: INFO: Pod daemon-set-v28k8 is not available
Jan 28 01:07:06.456: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:06.456: INFO: Wrong image for pod: daemon-set-v28k8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:06.456: INFO: Pod daemon-set-v28k8 is not available
Jan 28 01:07:07.455: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:07.455: INFO: Wrong image for pod: daemon-set-v28k8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:07.455: INFO: Pod daemon-set-v28k8 is not available
Jan 28 01:07:08.456: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:08.456: INFO: Wrong image for pod: daemon-set-v28k8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:08.456: INFO: Pod daemon-set-v28k8 is not available
Jan 28 01:07:09.457: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:09.457: INFO: Wrong image for pod: daemon-set-v28k8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:09.457: INFO: Pod daemon-set-v28k8 is not available
Jan 28 01:07:10.459: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:10.459: INFO: Wrong image for pod: daemon-set-v28k8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:10.459: INFO: Pod daemon-set-v28k8 is not available
Jan 28 01:07:11.457: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:11.457: INFO: Wrong image for pod: daemon-set-v28k8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:11.457: INFO: Pod daemon-set-v28k8 is not available
Jan 28 01:07:12.460: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:12.460: INFO: Wrong image for pod: daemon-set-v28k8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:12.460: INFO: Pod daemon-set-v28k8 is not available
Jan 28 01:07:13.471: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:13.471: INFO: Pod daemon-set-rtbkn is not available
Jan 28 01:07:14.456: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:14.456: INFO: Pod daemon-set-rtbkn is not available
Jan 28 01:07:15.495: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:15.495: INFO: Pod daemon-set-rtbkn is not available
Jan 28 01:07:16.460: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:16.460: INFO: Pod daemon-set-rtbkn is not available
Jan 28 01:07:17.472: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:17.472: INFO: Pod daemon-set-rtbkn is not available
Jan 28 01:07:18.650: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:18.650: INFO: Pod daemon-set-rtbkn is not available
Jan 28 01:07:19.471: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:19.471: INFO: Pod daemon-set-rtbkn is not available
Jan 28 01:07:20.457: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:20.457: INFO: Pod daemon-set-rtbkn is not available
Jan 28 01:07:21.457: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:22.462: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:23.457: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:24.457: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:25.456: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:25.456: INFO: Pod daemon-set-prz7c is not available
Jan 28 01:07:26.457: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:26.457: INFO: Pod daemon-set-prz7c is not available
Jan 28 01:07:27.456: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:27.457: INFO: Pod daemon-set-prz7c is not available
Jan 28 01:07:28.460: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:28.460: INFO: Pod daemon-set-prz7c is not available
Jan 28 01:07:29.456: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:29.456: INFO: Pod daemon-set-prz7c is not available
Jan 28 01:07:30.457: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:30.457: INFO: Pod daemon-set-prz7c is not available
Jan 28 01:07:31.456: INFO: Wrong image for pod: daemon-set-prz7c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 28 01:07:31.456: INFO: Pod daemon-set-prz7c is not available
Jan 28 01:07:32.457: INFO: Pod daemon-set-vss4g is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 28 01:07:32.483: INFO: Number of nodes with available pods: 1
Jan 28 01:07:32.484: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:07:33.496: INFO: Number of nodes with available pods: 1
Jan 28 01:07:33.496: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:07:34.498: INFO: Number of nodes with available pods: 1
Jan 28 01:07:34.498: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:07:35.497: INFO: Number of nodes with available pods: 1
Jan 28 01:07:35.497: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:07:36.525: INFO: Number of nodes with available pods: 1
Jan 28 01:07:36.525: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:07:37.495: INFO: Number of nodes with available pods: 1
Jan 28 01:07:37.495: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:07:38.511: INFO: Number of nodes with available pods: 2
Jan 28 01:07:38.511: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5817, will wait for the garbage collector to delete the pods
Jan 28 01:07:38.595: INFO: Deleting DaemonSet.extensions daemon-set took: 10.375136ms
Jan 28 01:07:39.096: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.536811ms
Jan 28 01:07:53.118: INFO: Number of nodes with available pods: 0
Jan 28 01:07:53.118: INFO: Number of running nodes: 0, number of available pods: 0
Jan 28 01:07:53.122: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5817/daemonsets","resourceVersion":"4788768"},"items":null}

Jan 28 01:07:53.126: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5817/pods","resourceVersion":"4788768"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:07:53.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5817" for this suite.

• [SLOW TEST:68.051 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":280,"completed":211,"skipped":3401,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:07:53.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 28 01:07:53.329: INFO: Waiting up to 5m0s for pod "downward-api-c688b8f1-e8fa-4f8e-b54b-7dc07447f3e9" in namespace "downward-api-3459" to be "success or failure"
Jan 28 01:07:53.341: INFO: Pod "downward-api-c688b8f1-e8fa-4f8e-b54b-7dc07447f3e9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.173464ms
Jan 28 01:07:55.347: INFO: Pod "downward-api-c688b8f1-e8fa-4f8e-b54b-7dc07447f3e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017622721s
Jan 28 01:07:57.356: INFO: Pod "downward-api-c688b8f1-e8fa-4f8e-b54b-7dc07447f3e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026418301s
Jan 28 01:07:59.363: INFO: Pod "downward-api-c688b8f1-e8fa-4f8e-b54b-7dc07447f3e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033189874s
Jan 28 01:08:01.370: INFO: Pod "downward-api-c688b8f1-e8fa-4f8e-b54b-7dc07447f3e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039906877s
STEP: Saw pod success
Jan 28 01:08:01.370: INFO: Pod "downward-api-c688b8f1-e8fa-4f8e-b54b-7dc07447f3e9" satisfied condition "success or failure"
Jan 28 01:08:01.374: INFO: Trying to get logs from node jerma-node pod downward-api-c688b8f1-e8fa-4f8e-b54b-7dc07447f3e9 container dapi-container: 
STEP: delete the pod
Jan 28 01:08:01.465: INFO: Waiting for pod downward-api-c688b8f1-e8fa-4f8e-b54b-7dc07447f3e9 to disappear
Jan 28 01:08:01.478: INFO: Pod downward-api-c688b8f1-e8fa-4f8e-b54b-7dc07447f3e9 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:08:01.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3459" for this suite.

• [SLOW TEST:8.306 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":280,"completed":212,"skipped":3421,"failed":0}
SSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:08:01.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap that has name configmap-test-emptyKey-80d71dd0-f855-4e19-9086-4ace6bdae582
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:08:01.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8621" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":280,"completed":213,"skipped":3426,"failed":0}
S
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:08:01.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test hostPath mode
Jan 28 01:08:01.746: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4637" to be "success or failure"
Jan 28 01:08:01.753: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 7.471407ms
Jan 28 01:08:03.759: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013253653s
Jan 28 01:08:05.765: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019312665s
Jan 28 01:08:07.772: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.025954782s
Jan 28 01:08:09.780: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.034234095s
Jan 28 01:08:11.786: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.040751739s
STEP: Saw pod success
Jan 28 01:08:11.787: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 28 01:08:11.791: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 28 01:08:11.903: INFO: Waiting for pod pod-host-path-test to disappear
Jan 28 01:08:11.911: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:08:11.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-4637" for this suite.

• [SLOW TEST:10.293 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":214,"skipped":3427,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:08:11.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 01:08:12.775: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 01:08:14.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770492, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770492, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770492, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770492, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:08:16.804: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770492, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770492, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770492, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770492, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 01:08:19.828: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jan 28 01:08:19.884: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:08:19.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2963" for this suite.
STEP: Destroying namespace "webhook-2963-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:8.185 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":280,"completed":215,"skipped":3429,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:08:20.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-5817
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 28 01:08:20.296: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 28 01:08:20.416: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:08:22.454: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:08:24.423: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:08:27.176: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:08:29.100: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:08:30.421: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:08:32.422: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 01:08:34.425: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 01:08:36.428: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 01:08:38.428: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 01:08:40.423: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 01:08:42.420: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 01:08:44.448: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 01:08:46.447: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 01:08:48.422: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 28 01:08:48.430: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jan 28 01:08:58.541: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5817 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 01:08:58.541: INFO: >>> kubeConfig: /root/.kube/config
I0128 01:08:58.614742       9 log.go:172] (0xc001f20000) (0xc000be37c0) Create stream
I0128 01:08:58.614936       9 log.go:172] (0xc001f20000) (0xc000be37c0) Stream added, broadcasting: 1
I0128 01:08:58.626146       9 log.go:172] (0xc001f20000) Reply frame received for 1
I0128 01:08:58.626581       9 log.go:172] (0xc001f20000) (0xc002256960) Create stream
I0128 01:08:58.626624       9 log.go:172] (0xc001f20000) (0xc002256960) Stream added, broadcasting: 3
I0128 01:08:58.629395       9 log.go:172] (0xc001f20000) Reply frame received for 3
I0128 01:08:58.629429       9 log.go:172] (0xc001f20000) (0xc002a80280) Create stream
I0128 01:08:58.629437       9 log.go:172] (0xc001f20000) (0xc002a80280) Stream added, broadcasting: 5
I0128 01:08:58.630898       9 log.go:172] (0xc001f20000) Reply frame received for 5
I0128 01:08:58.756158       9 log.go:172] (0xc001f20000) Data frame received for 3
I0128 01:08:58.756212       9 log.go:172] (0xc002256960) (3) Data frame handling
I0128 01:08:58.756233       9 log.go:172] (0xc002256960) (3) Data frame sent
I0128 01:08:58.842919       9 log.go:172] (0xc001f20000) (0xc002256960) Stream removed, broadcasting: 3
I0128 01:08:58.842987       9 log.go:172] (0xc001f20000) Data frame received for 1
I0128 01:08:58.843005       9 log.go:172] (0xc000be37c0) (1) Data frame handling
I0128 01:08:58.843015       9 log.go:172] (0xc000be37c0) (1) Data frame sent
I0128 01:08:58.843032       9 log.go:172] (0xc001f20000) (0xc002a80280) Stream removed, broadcasting: 5
I0128 01:08:58.843045       9 log.go:172] (0xc001f20000) (0xc000be37c0) Stream removed, broadcasting: 1
I0128 01:08:58.843060       9 log.go:172] (0xc001f20000) Go away received
I0128 01:08:58.843437       9 log.go:172] (0xc001f20000) (0xc000be37c0) Stream removed, broadcasting: 1
I0128 01:08:58.843449       9 log.go:172] (0xc001f20000) (0xc002256960) Stream removed, broadcasting: 3
I0128 01:08:58.843456       9 log.go:172] (0xc001f20000) (0xc002a80280) Stream removed, broadcasting: 5
Jan 28 01:08:58.843: INFO: Found all expected endpoints: [netserver-0]
Jan 28 01:08:58.848: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5817 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 01:08:58.848: INFO: >>> kubeConfig: /root/.kube/config
I0128 01:08:58.892273       9 log.go:172] (0xc0021c6420) (0xc002256dc0) Create stream
I0128 01:08:58.892525       9 log.go:172] (0xc0021c6420) (0xc002256dc0) Stream added, broadcasting: 1
I0128 01:08:58.899463       9 log.go:172] (0xc0021c6420) Reply frame received for 1
I0128 01:08:58.899557       9 log.go:172] (0xc0021c6420) (0xc000be3860) Create stream
I0128 01:08:58.899571       9 log.go:172] (0xc0021c6420) (0xc000be3860) Stream added, broadcasting: 3
I0128 01:08:58.902367       9 log.go:172] (0xc0021c6420) Reply frame received for 3
I0128 01:08:58.902413       9 log.go:172] (0xc0021c6420) (0xc002a80320) Create stream
I0128 01:08:58.902432       9 log.go:172] (0xc0021c6420) (0xc002a80320) Stream added, broadcasting: 5
I0128 01:08:58.903849       9 log.go:172] (0xc0021c6420) Reply frame received for 5
I0128 01:08:59.006386       9 log.go:172] (0xc0021c6420) Data frame received for 3
I0128 01:08:59.006430       9 log.go:172] (0xc000be3860) (3) Data frame handling
I0128 01:08:59.006451       9 log.go:172] (0xc000be3860) (3) Data frame sent
I0128 01:08:59.091424       9 log.go:172] (0xc0021c6420) Data frame received for 1
I0128 01:08:59.091486       9 log.go:172] (0xc0021c6420) (0xc000be3860) Stream removed, broadcasting: 3
I0128 01:08:59.091580       9 log.go:172] (0xc002256dc0) (1) Data frame handling
I0128 01:08:59.091602       9 log.go:172] (0xc002256dc0) (1) Data frame sent
I0128 01:08:59.091619       9 log.go:172] (0xc0021c6420) (0xc002a80320) Stream removed, broadcasting: 5
I0128 01:08:59.091662       9 log.go:172] (0xc0021c6420) (0xc002256dc0) Stream removed, broadcasting: 1
I0128 01:08:59.091683       9 log.go:172] (0xc0021c6420) Go away received
I0128 01:08:59.091949       9 log.go:172] (0xc0021c6420) (0xc002256dc0) Stream removed, broadcasting: 1
I0128 01:08:59.091961       9 log.go:172] (0xc0021c6420) (0xc000be3860) Stream removed, broadcasting: 3
I0128 01:08:59.091972       9 log.go:172] (0xc0021c6420) (0xc002a80320) Stream removed, broadcasting: 5
Jan 28 01:08:59.092: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:08:59.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5817" for this suite.

• [SLOW TEST:38.990 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":216,"skipped":3501,"failed":0}
SSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:08:59.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2213.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-2213.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2213.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-2213.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2213.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-2213.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-2213.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-2213.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-2213.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2213.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 01:09:15.399: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:15.405: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:15.415: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:15.421: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:15.436: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:15.440: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:15.465: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:15.468: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:15.475: INFO: Lookups using dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2213.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2213.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local jessie_udp@dns-test-service-2.dns-2213.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2213.svc.cluster.local]

Jan 28 01:09:20.485: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:20.490: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:20.494: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:20.500: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:20.519: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:20.524: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:20.532: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:20.535: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:20.544: INFO: Lookups using dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2213.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2213.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local jessie_udp@dns-test-service-2.dns-2213.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2213.svc.cluster.local]

Jan 28 01:09:25.484: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:25.492: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:25.497: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:25.504: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:25.548: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:25.553: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:25.557: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:25.561: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:25.570: INFO: Lookups using dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2213.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2213.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local jessie_udp@dns-test-service-2.dns-2213.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2213.svc.cluster.local]

Jan 28 01:09:30.484: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:30.494: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:30.498: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:30.502: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:30.521: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:30.530: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:30.537: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:30.541: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:30.565: INFO: Lookups using dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2213.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2213.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local jessie_udp@dns-test-service-2.dns-2213.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2213.svc.cluster.local]

Jan 28 01:09:35.484: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:35.492: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:35.497: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:35.503: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:35.522: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:35.527: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:35.532: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:35.537: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:35.550: INFO: Lookups using dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2213.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2213.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local jessie_udp@dns-test-service-2.dns-2213.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2213.svc.cluster.local]

Jan 28 01:09:40.486: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:40.492: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:40.498: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:40.503: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:40.524: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:40.528: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:40.533: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:40.538: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2213.svc.cluster.local from pod dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4: the server could not find the requested resource (get pods dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4)
Jan 28 01:09:40.547: INFO: Lookups using dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2213.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2213.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2213.svc.cluster.local jessie_udp@dns-test-service-2.dns-2213.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2213.svc.cluster.local]

Jan 28 01:09:45.521: INFO: DNS probes using dns-2213/dns-test-cb0b9001-0f00-4e2b-9a75-cd229b5e26d4 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:09:45.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2213" for this suite.

• [SLOW TEST:46.833 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":280,"completed":217,"skipped":3504,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:09:45.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-fb7fccbc-1cd8-4dcf-a1a9-ed0863122e7b
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:09:58.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6672" for this suite.

• [SLOW TEST:12.486 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":218,"skipped":3535,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:09:58.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jan 28 01:09:58.515: INFO: >>> kubeConfig: /root/.kube/config
Jan 28 01:10:00.596: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:10:12.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6435" for this suite.

• [SLOW TEST:14.055 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":280,"completed":219,"skipped":3555,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:10:12.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 01:10:12.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Jan 28 01:10:13.313: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-28T01:10:13Z generation:1 name:name1 resourceVersion:4789413 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0c4f992c-09de-4f5f-8830-bdebd11483c9] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jan 28 01:10:23.326: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-28T01:10:23Z generation:1 name:name2 resourceVersion:4789441 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:dddd00e5-1021-428e-8fc7-9e36a008456e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jan 28 01:10:33.338: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-28T01:10:13Z generation:2 name:name1 resourceVersion:4789465 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0c4f992c-09de-4f5f-8830-bdebd11483c9] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jan 28 01:10:43.348: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-28T01:10:23Z generation:2 name:name2 resourceVersion:4789487 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:dddd00e5-1021-428e-8fc7-9e36a008456e] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jan 28 01:10:53.367: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-28T01:10:13Z generation:2 name:name1 resourceVersion:4789511 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0c4f992c-09de-4f5f-8830-bdebd11483c9] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jan 28 01:11:03.378: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-28T01:10:23Z generation:2 name:name2 resourceVersion:4789535 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:dddd00e5-1021-428e-8fc7-9e36a008456e] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:11:13.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-3455" for this suite.

• [SLOW TEST:61.446 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":280,"completed":220,"skipped":3558,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:11:13.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 01:11:14.202: INFO: Create a RollingUpdate DaemonSet
Jan 28 01:11:14.207: INFO: Check that daemon pods launch on every node of the cluster
Jan 28 01:11:14.243: INFO: Number of nodes with available pods: 0
Jan 28 01:11:14.243: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:11:15.886: INFO: Number of nodes with available pods: 0
Jan 28 01:11:15.886: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:11:16.467: INFO: Number of nodes with available pods: 0
Jan 28 01:11:16.467: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:11:17.253: INFO: Number of nodes with available pods: 0
Jan 28 01:11:17.253: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:11:18.249: INFO: Number of nodes with available pods: 0
Jan 28 01:11:18.249: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:11:20.597: INFO: Number of nodes with available pods: 0
Jan 28 01:11:20.597: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:11:21.670: INFO: Number of nodes with available pods: 0
Jan 28 01:11:21.670: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:11:22.250: INFO: Number of nodes with available pods: 0
Jan 28 01:11:22.250: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:11:23.259: INFO: Number of nodes with available pods: 2
Jan 28 01:11:23.259: INFO: Number of running nodes: 2, number of available pods: 2
Jan 28 01:11:23.259: INFO: Update the DaemonSet to trigger a rollout
Jan 28 01:11:23.270: INFO: Updating DaemonSet daemon-set
Jan 28 01:11:33.375: INFO: Roll back the DaemonSet before rollout is complete
Jan 28 01:11:33.385: INFO: Updating DaemonSet daemon-set
Jan 28 01:11:33.385: INFO: Make sure DaemonSet rollback is complete
Jan 28 01:11:33.442: INFO: Wrong image for pod: daemon-set-nhg2l. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 28 01:11:33.442: INFO: Pod daemon-set-nhg2l is not available
Jan 28 01:11:34.485: INFO: Wrong image for pod: daemon-set-nhg2l. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 28 01:11:34.485: INFO: Pod daemon-set-nhg2l is not available
Jan 28 01:11:35.466: INFO: Wrong image for pod: daemon-set-nhg2l. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 28 01:11:35.466: INFO: Pod daemon-set-nhg2l is not available
Jan 28 01:11:36.468: INFO: Wrong image for pod: daemon-set-nhg2l. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 28 01:11:36.468: INFO: Pod daemon-set-nhg2l is not available
Jan 28 01:11:37.465: INFO: Wrong image for pod: daemon-set-nhg2l. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 28 01:11:37.465: INFO: Pod daemon-set-nhg2l is not available
Jan 28 01:11:38.530: INFO: Wrong image for pod: daemon-set-nhg2l. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 28 01:11:38.530: INFO: Pod daemon-set-nhg2l is not available
Jan 28 01:11:39.469: INFO: Pod daemon-set-xwpb6 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5140, will wait for the garbage collector to delete the pods
Jan 28 01:11:39.550: INFO: Deleting DaemonSet.extensions daemon-set took: 8.921473ms
Jan 28 01:11:39.950: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.355245ms
Jan 28 01:11:52.370: INFO: Number of nodes with available pods: 0
Jan 28 01:11:52.370: INFO: Number of running nodes: 0, number of available pods: 0
Jan 28 01:11:52.373: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5140/daemonsets","resourceVersion":"4789728"},"items":null}

Jan 28 01:11:52.376: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5140/pods","resourceVersion":"4789728"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:11:52.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5140" for this suite.

• [SLOW TEST:38.474 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":280,"completed":221,"skipped":3567,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:11:52.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:12:03.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3784" for this suite.

• [SLOW TEST:11.228 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":280,"completed":222,"skipped":3579,"failed":0}
S
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:12:03.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 01:12:03.740: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:12:04.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-98" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":280,"completed":223,"skipped":3580,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:12:04.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 01:12:04.587: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-df9d5547-d1d6-4394-967d-423fd5c0cf89" in namespace "security-context-test-1231" to be "success or failure"
Jan 28 01:12:04.599: INFO: Pod "alpine-nnp-false-df9d5547-d1d6-4394-967d-423fd5c0cf89": Phase="Pending", Reason="", readiness=false. Elapsed: 11.223514ms
Jan 28 01:12:06.606: INFO: Pod "alpine-nnp-false-df9d5547-d1d6-4394-967d-423fd5c0cf89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018176055s
Jan 28 01:12:08.671: INFO: Pod "alpine-nnp-false-df9d5547-d1d6-4394-967d-423fd5c0cf89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083364609s
Jan 28 01:12:10.676: INFO: Pod "alpine-nnp-false-df9d5547-d1d6-4394-967d-423fd5c0cf89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088112821s
Jan 28 01:12:10.676: INFO: Pod "alpine-nnp-false-df9d5547-d1d6-4394-967d-423fd5c0cf89" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:12:10.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1231" for this suite.

• [SLOW TEST:6.315 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":224,"skipped":3593,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:12:10.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting the proxy server
Jan 28 01:12:10.867: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:12:11.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4478" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":280,"completed":225,"skipped":3616,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:12:11.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 28 01:12:11.202: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1958 /api/v1/namespaces/watch-1958/configmaps/e2e-watch-test-watch-closed acbf9dd8-1876-4c52-9d49-bcd370e45d8b 4789860 0 2020-01-28 01:12:11 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 28 01:12:11.202: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1958 /api/v1/namespaces/watch-1958/configmaps/e2e-watch-test-watch-closed acbf9dd8-1876-4c52-9d49-bcd370e45d8b 4789861 0 2020-01-28 01:12:11 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 28 01:12:11.256: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1958 /api/v1/namespaces/watch-1958/configmaps/e2e-watch-test-watch-closed acbf9dd8-1876-4c52-9d49-bcd370e45d8b 4789862 0 2020-01-28 01:12:11 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 28 01:12:11.256: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1958 /api/v1/namespaces/watch-1958/configmaps/e2e-watch-test-watch-closed acbf9dd8-1876-4c52-9d49-bcd370e45d8b 4789863 0 2020-01-28 01:12:11 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:12:11.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1958" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":280,"completed":226,"skipped":3635,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:12:11.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0128 01:12:52.088373       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 28 01:12:52.088: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:12:52.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1013" for this suite.

• [SLOW TEST:40.832 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":280,"completed":227,"skipped":3645,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:12:52.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 01:12:52.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 28 01:12:54.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1291 create -f -'
Jan 28 01:12:57.578: INFO: stderr: ""
Jan 28 01:12:57.579: INFO: stdout: "e2e-test-crd-publish-openapi-2471-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 28 01:12:57.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1291 delete e2e-test-crd-publish-openapi-2471-crds test-cr'
Jan 28 01:12:57.795: INFO: stderr: ""
Jan 28 01:12:57.795: INFO: stdout: "e2e-test-crd-publish-openapi-2471-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jan 28 01:12:57.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1291 apply -f -'
Jan 28 01:12:59.575: INFO: stderr: ""
Jan 28 01:12:59.575: INFO: stdout: "e2e-test-crd-publish-openapi-2471-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 28 01:12:59.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1291 delete e2e-test-crd-publish-openapi-2471-crds test-cr'
Jan 28 01:13:01.865: INFO: stderr: ""
Jan 28 01:13:01.865: INFO: stdout: "e2e-test-crd-publish-openapi-2471-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 28 01:13:01.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2471-crds'
Jan 28 01:13:02.420: INFO: stderr: ""
Jan 28 01:13:02.421: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2471-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:13:07.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1291" for this suite.

• [SLOW TEST:15.949 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":280,"completed":228,"skipped":3645,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:13:08.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: set up a multi version CRD
Jan 28 01:13:09.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:13:27.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7747" for this suite.

• [SLOW TEST:19.294 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":280,"completed":229,"skipped":3654,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:13:27.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 01:13:27.477: INFO: Creating ReplicaSet my-hostname-basic-3d1df94c-b221-436f-802b-403221be9ebd
Jan 28 01:13:27.489: INFO: Pod name my-hostname-basic-3d1df94c-b221-436f-802b-403221be9ebd: Found 0 pods out of 1
Jan 28 01:13:32.501: INFO: Pod name my-hostname-basic-3d1df94c-b221-436f-802b-403221be9ebd: Found 1 pods out of 1
Jan 28 01:13:32.501: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-3d1df94c-b221-436f-802b-403221be9ebd" is running
Jan 28 01:13:36.560: INFO: Pod "my-hostname-basic-3d1df94c-b221-436f-802b-403221be9ebd-kwcs9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 01:13:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 01:13:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3d1df94c-b221-436f-802b-403221be9ebd]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 01:13:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3d1df94c-b221-436f-802b-403221be9ebd]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-28 01:13:27 +0000 UTC Reason: Message:}])
Jan 28 01:13:36.561: INFO: Trying to dial the pod
Jan 28 01:13:41.589: INFO: Controller my-hostname-basic-3d1df94c-b221-436f-802b-403221be9ebd: Got expected result from replica 1 [my-hostname-basic-3d1df94c-b221-436f-802b-403221be9ebd-kwcs9]: "my-hostname-basic-3d1df94c-b221-436f-802b-403221be9ebd-kwcs9", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:13:41.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8140" for this suite.

• [SLOW TEST:14.270 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":280,"completed":230,"skipped":3723,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:13:41.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Starting the proxy
Jan 28 01:13:42.674: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix920735578/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:13:42.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1737" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":280,"completed":231,"skipped":3736,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:13:42.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 28 01:13:42.940: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1a7f4ee-7f53-4fc6-b77c-e9bfbce296ed" in namespace "downward-api-1204" to be "success or failure"
Jan 28 01:13:42.943: INFO: Pod "downwardapi-volume-c1a7f4ee-7f53-4fc6-b77c-e9bfbce296ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.973852ms
Jan 28 01:13:44.967: INFO: Pod "downwardapi-volume-c1a7f4ee-7f53-4fc6-b77c-e9bfbce296ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027375293s
Jan 28 01:13:46.986: INFO: Pod "downwardapi-volume-c1a7f4ee-7f53-4fc6-b77c-e9bfbce296ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046499278s
Jan 28 01:13:49.001: INFO: Pod "downwardapi-volume-c1a7f4ee-7f53-4fc6-b77c-e9bfbce296ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061360677s
Jan 28 01:13:51.006: INFO: Pod "downwardapi-volume-c1a7f4ee-7f53-4fc6-b77c-e9bfbce296ed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066561183s
Jan 28 01:13:53.014: INFO: Pod "downwardapi-volume-c1a7f4ee-7f53-4fc6-b77c-e9bfbce296ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.074882795s
STEP: Saw pod success
Jan 28 01:13:53.015: INFO: Pod "downwardapi-volume-c1a7f4ee-7f53-4fc6-b77c-e9bfbce296ed" satisfied condition "success or failure"
Jan 28 01:13:53.048: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c1a7f4ee-7f53-4fc6-b77c-e9bfbce296ed container client-container: 
STEP: delete the pod
Jan 28 01:13:53.238: INFO: Waiting for pod downwardapi-volume-c1a7f4ee-7f53-4fc6-b77c-e9bfbce296ed to disappear
Jan 28 01:13:53.338: INFO: Pod downwardapi-volume-c1a7f4ee-7f53-4fc6-b77c-e9bfbce296ed no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:13:53.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1204" for this suite.

• [SLOW TEST:10.564 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":232,"skipped":3791,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:13:53.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on the node's default medium
Jan 28 01:13:53.653: INFO: Waiting up to 5m0s for pod "pod-b4291d41-3c93-4b4e-8bec-4510380dd3ea" in namespace "emptydir-7186" to be "success or failure"
Jan 28 01:13:53.676: INFO: Pod "pod-b4291d41-3c93-4b4e-8bec-4510380dd3ea": Phase="Pending", Reason="", readiness=false. Elapsed: 23.47328ms
Jan 28 01:13:55.683: INFO: Pod "pod-b4291d41-3c93-4b4e-8bec-4510380dd3ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030655007s
Jan 28 01:13:57.689: INFO: Pod "pod-b4291d41-3c93-4b4e-8bec-4510380dd3ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036089457s
Jan 28 01:13:59.713: INFO: Pod "pod-b4291d41-3c93-4b4e-8bec-4510380dd3ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060407467s
Jan 28 01:14:01.774: INFO: Pod "pod-b4291d41-3c93-4b4e-8bec-4510380dd3ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.121670775s
STEP: Saw pod success
Jan 28 01:14:01.774: INFO: Pod "pod-b4291d41-3c93-4b4e-8bec-4510380dd3ea" satisfied condition "success or failure"
Jan 28 01:14:01.780: INFO: Trying to get logs from node jerma-node pod pod-b4291d41-3c93-4b4e-8bec-4510380dd3ea container test-container: 
STEP: delete the pod
Jan 28 01:14:01.839: INFO: Waiting for pod pod-b4291d41-3c93-4b4e-8bec-4510380dd3ea to disappear
Jan 28 01:14:01.915: INFO: Pod pod-b4291d41-3c93-4b4e-8bec-4510380dd3ea no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:14:01.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7186" for this suite.

• [SLOW TEST:8.556 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":233,"skipped":3803,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:14:01.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan 28 01:14:02.020: INFO: >>> kubeConfig: /root/.kube/config
Jan 28 01:14:05.128: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:14:18.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-914" for this suite.

• [SLOW TEST:16.292 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":280,"completed":234,"skipped":3816,"failed":0}
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:14:18.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating Agnhost RC
Jan 28 01:14:18.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2113'
Jan 28 01:14:18.722: INFO: stderr: ""
Jan 28 01:14:18.722: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 28 01:14:19.729: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 28 01:14:19.729: INFO: Found 0 / 1
Jan 28 01:14:20.728: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 28 01:14:20.728: INFO: Found 0 / 1
Jan 28 01:14:21.728: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 28 01:14:21.728: INFO: Found 0 / 1
Jan 28 01:14:22.737: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 28 01:14:22.737: INFO: Found 0 / 1
Jan 28 01:14:23.796: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 28 01:14:23.796: INFO: Found 0 / 1
Jan 28 01:14:24.728: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 28 01:14:24.728: INFO: Found 1 / 1
Jan 28 01:14:24.728: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Jan 28 01:14:24.731: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 28 01:14:24.731: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Jan 28 01:14:24.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-tmvw4 --namespace=kubectl-2113 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 28 01:14:24.866: INFO: stderr: ""
Jan 28 01:14:24.866: INFO: stdout: "pod/agnhost-master-tmvw4 patched\n"
STEP: checking annotations
Jan 28 01:14:24.870: INFO: Selector matched 1 pod for map[app:agnhost]
Jan 28 01:14:24.870: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:14:24.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2113" for this suite.

• [SLOW TEST:6.650 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":280,"completed":235,"skipped":3816,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:14:24.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 01:14:25.022: INFO: Waiting up to 5m0s for pod "busybox-user-65534-fa51c5ff-cc58-4655-9db8-3cc58a7a1854" in namespace "security-context-test-4957" to be "success or failure"
Jan 28 01:14:25.106: INFO: Pod "busybox-user-65534-fa51c5ff-cc58-4655-9db8-3cc58a7a1854": Phase="Pending", Reason="", readiness=false. Elapsed: 83.347702ms
Jan 28 01:14:27.113: INFO: Pod "busybox-user-65534-fa51c5ff-cc58-4655-9db8-3cc58a7a1854": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090038286s
Jan 28 01:14:29.119: INFO: Pod "busybox-user-65534-fa51c5ff-cc58-4655-9db8-3cc58a7a1854": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095991382s
Jan 28 01:14:31.126: INFO: Pod "busybox-user-65534-fa51c5ff-cc58-4655-9db8-3cc58a7a1854": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103348623s
Jan 28 01:14:33.135: INFO: Pod "busybox-user-65534-fa51c5ff-cc58-4655-9db8-3cc58a7a1854": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.11187652s
Jan 28 01:14:33.135: INFO: Pod "busybox-user-65534-fa51c5ff-cc58-4655-9db8-3cc58a7a1854" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:14:33.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4957" for this suite.

• [SLOW TEST:8.272 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":236,"skipped":3840,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:14:33.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-c73ef30a-8e46-4d4a-94e8-8e938becaa1d
STEP: Creating a pod to test consume configMaps
Jan 28 01:14:33.354: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6b47acf8-573a-4f46-ad9d-92dbc22d486d" in namespace "projected-3793" to be "success or failure"
Jan 28 01:14:33.359: INFO: Pod "pod-projected-configmaps-6b47acf8-573a-4f46-ad9d-92dbc22d486d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.594374ms
Jan 28 01:14:35.369: INFO: Pod "pod-projected-configmaps-6b47acf8-573a-4f46-ad9d-92dbc22d486d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014954119s
Jan 28 01:14:37.382: INFO: Pod "pod-projected-configmaps-6b47acf8-573a-4f46-ad9d-92dbc22d486d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02760634s
Jan 28 01:14:39.388: INFO: Pod "pod-projected-configmaps-6b47acf8-573a-4f46-ad9d-92dbc22d486d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033920268s
Jan 28 01:14:41.396: INFO: Pod "pod-projected-configmaps-6b47acf8-573a-4f46-ad9d-92dbc22d486d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041200111s
STEP: Saw pod success
Jan 28 01:14:41.396: INFO: Pod "pod-projected-configmaps-6b47acf8-573a-4f46-ad9d-92dbc22d486d" satisfied condition "success or failure"
Jan 28 01:14:41.401: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-6b47acf8-573a-4f46-ad9d-92dbc22d486d container projected-configmap-volume-test: 
STEP: delete the pod
Jan 28 01:14:41.481: INFO: Waiting for pod pod-projected-configmaps-6b47acf8-573a-4f46-ad9d-92dbc22d486d to disappear
Jan 28 01:14:41.548: INFO: Pod pod-projected-configmaps-6b47acf8-573a-4f46-ad9d-92dbc22d486d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:14:41.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3793" for this suite.

• [SLOW TEST:8.415 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":237,"skipped":3845,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:14:41.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:14:48.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4382" for this suite.

• [SLOW TEST:7.207 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":280,"completed":238,"skipped":3884,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:14:48.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name s-test-opt-del-caf1b0c4-9f28-4b42-bb8e-b17ae1396001
STEP: Creating secret with name s-test-opt-upd-7f206b8b-cd5d-44c4-b19b-db9478e46b35
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-caf1b0c4-9f28-4b42-bb8e-b17ae1396001
STEP: Updating secret s-test-opt-upd-7f206b8b-cd5d-44c4-b19b-db9478e46b35
STEP: Creating secret with name s-test-opt-create-0490352d-0154-4ec0-8b99-3c37acf6dc44
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:16:26.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2214" for this suite.

• [SLOW TEST:97.465 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":239,"skipped":3895,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:16:26.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name projected-secret-test-846d7ad5-1562-427d-885b-5b3183d48bce
STEP: Creating a pod to test consume secrets
Jan 28 01:16:26.345: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-549b2269-8275-4cc6-8867-95d82cd1f948" in namespace "projected-5814" to be "success or failure"
Jan 28 01:16:26.449: INFO: Pod "pod-projected-secrets-549b2269-8275-4cc6-8867-95d82cd1f948": Phase="Pending", Reason="", readiness=false. Elapsed: 103.619543ms
Jan 28 01:16:28.539: INFO: Pod "pod-projected-secrets-549b2269-8275-4cc6-8867-95d82cd1f948": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194432508s
Jan 28 01:16:30.546: INFO: Pod "pod-projected-secrets-549b2269-8275-4cc6-8867-95d82cd1f948": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201022347s
Jan 28 01:16:32.554: INFO: Pod "pod-projected-secrets-549b2269-8275-4cc6-8867-95d82cd1f948": Phase="Pending", Reason="", readiness=false. Elapsed: 6.209020021s
Jan 28 01:16:34.577: INFO: Pod "pod-projected-secrets-549b2269-8275-4cc6-8867-95d82cd1f948": Phase="Pending", Reason="", readiness=false. Elapsed: 8.231953372s
Jan 28 01:16:36.584: INFO: Pod "pod-projected-secrets-549b2269-8275-4cc6-8867-95d82cd1f948": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.238898859s
STEP: Saw pod success
Jan 28 01:16:36.584: INFO: Pod "pod-projected-secrets-549b2269-8275-4cc6-8867-95d82cd1f948" satisfied condition "success or failure"
Jan 28 01:16:36.588: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-549b2269-8275-4cc6-8867-95d82cd1f948 container secret-volume-test: 
STEP: delete the pod
Jan 28 01:16:36.659: INFO: Waiting for pod pod-projected-secrets-549b2269-8275-4cc6-8867-95d82cd1f948 to disappear
Jan 28 01:16:36.674: INFO: Pod pod-projected-secrets-549b2269-8275-4cc6-8867-95d82cd1f948 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:16:36.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5814" for this suite.

• [SLOW TEST:10.574 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":240,"skipped":3905,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:16:36.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 28 01:16:38.122: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 28 01:16:40.143: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770998, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770998, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770998, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770998, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:16:42.192: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770998, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770998, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770998, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770998, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:16:44.148: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770998, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770998, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770998, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715770998, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 01:16:47.196: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 01:16:47.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:16:48.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-250" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:12.039 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":280,"completed":241,"skipped":3913,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:16:48.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 28 01:16:57.564: INFO: Successfully updated pod "pod-update-66db0e9f-d5a9-4a56-824e-38d66262aa4e"
STEP: verifying the updated pod is in kubernetes
Jan 28 01:16:57.581: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:16:57.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1518" for this suite.

• [SLOW TEST:8.745 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":280,"completed":242,"skipped":3956,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:16:57.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan 28 01:17:09.793: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6939 PodName:pod-sharedvolume-a442c40d-7202-4a7d-9123-4ede7c2ea099 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 01:17:09.793: INFO: >>> kubeConfig: /root/.kube/config
I0128 01:17:09.874457       9 log.go:172] (0xc002227760) (0xc001bb8960) Create stream
I0128 01:17:09.874569       9 log.go:172] (0xc002227760) (0xc001bb8960) Stream added, broadcasting: 1
I0128 01:17:09.881277       9 log.go:172] (0xc002227760) Reply frame received for 1
I0128 01:17:09.881429       9 log.go:172] (0xc002227760) (0xc0002dbb80) Create stream
I0128 01:17:09.881455       9 log.go:172] (0xc002227760) (0xc0002dbb80) Stream added, broadcasting: 3
I0128 01:17:09.883925       9 log.go:172] (0xc002227760) Reply frame received for 3
I0128 01:17:09.883983       9 log.go:172] (0xc002227760) (0xc001bfeaa0) Create stream
I0128 01:17:09.884000       9 log.go:172] (0xc002227760) (0xc001bfeaa0) Stream added, broadcasting: 5
I0128 01:17:09.885755       9 log.go:172] (0xc002227760) Reply frame received for 5
I0128 01:17:09.976623       9 log.go:172] (0xc002227760) Data frame received for 3
I0128 01:17:09.976720       9 log.go:172] (0xc0002dbb80) (3) Data frame handling
I0128 01:17:09.976769       9 log.go:172] (0xc0002dbb80) (3) Data frame sent
I0128 01:17:10.068375       9 log.go:172] (0xc002227760) Data frame received for 1
I0128 01:17:10.068436       9 log.go:172] (0xc002227760) (0xc0002dbb80) Stream removed, broadcasting: 3
I0128 01:17:10.068473       9 log.go:172] (0xc001bb8960) (1) Data frame handling
I0128 01:17:10.068487       9 log.go:172] (0xc001bb8960) (1) Data frame sent
I0128 01:17:10.068500       9 log.go:172] (0xc002227760) (0xc001bb8960) Stream removed, broadcasting: 1
I0128 01:17:10.068860       9 log.go:172] (0xc002227760) (0xc001bfeaa0) Stream removed, broadcasting: 5
I0128 01:17:10.068932       9 log.go:172] (0xc002227760) (0xc001bb8960) Stream removed, broadcasting: 1
I0128 01:17:10.068956       9 log.go:172] (0xc002227760) (0xc0002dbb80) Stream removed, broadcasting: 3
I0128 01:17:10.068977       9 log.go:172] (0xc002227760) (0xc001bfeaa0) Stream removed, broadcasting: 5
Jan 28 01:17:10.069: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:17:10.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6939" for this suite.

• [SLOW TEST:12.484 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":280,"completed":243,"skipped":3969,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:17:10.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Jan 28 01:17:10.239: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 28 01:17:10.252: INFO: Waiting for terminating namespaces to be deleted...
Jan 28 01:17:10.255: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 28 01:17:10.262: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 28 01:17:10.262: INFO: 	Container weave ready: true, restart count 1
Jan 28 01:17:10.262: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 01:17:10.262: INFO: pod-sharedvolume-a442c40d-7202-4a7d-9123-4ede7c2ea099 from emptydir-6939 started at 2020-01-28 01:16:57 +0000 UTC (2 container statuses recorded)
Jan 28 01:17:10.262: INFO: 	Container busybox-main-container ready: true, restart count 0
Jan 28 01:17:10.262: INFO: 	Container busybox-sub-container ready: false, restart count 0
Jan 28 01:17:10.262: INFO: pod-update-66db0e9f-d5a9-4a56-824e-38d66262aa4e from pods-1518 started at 2020-01-28 01:16:49 +0000 UTC (1 container status recorded)
Jan 28 01:17:10.262: INFO: 	Container nginx ready: false, restart count 0
Jan 28 01:17:10.262: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 28 01:17:10.262: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 01:17:10.262: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 28 01:17:10.282: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 28 01:17:10.282: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 28 01:17:10.282: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 28 01:17:10.282: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 28 01:17:10.282: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 28 01:17:10.282: INFO: 	Container etcd ready: true, restart count 1
Jan 28 01:17:10.282: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 28 01:17:10.282: INFO: 	Container coredns ready: true, restart count 0
Jan 28 01:17:10.282: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 28 01:17:10.282: INFO: 	Container coredns ready: true, restart count 0
Jan 28 01:17:10.282: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 28 01:17:10.282: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 28 01:17:10.282: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 28 01:17:10.282: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 01:17:10.282: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 28 01:17:10.282: INFO: 	Container weave ready: true, restart count 0
Jan 28 01:17:10.282: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Jan 28 01:17:10.425: INFO: Pod pod-sharedvolume-a442c40d-7202-4a7d-9123-4ede7c2ea099 requesting resource cpu=0m on Node jerma-node
Jan 28 01:17:10.425: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 28 01:17:10.425: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 28 01:17:10.425: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 28 01:17:10.425: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Jan 28 01:17:10.425: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Jan 28 01:17:10.425: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 28 01:17:10.425: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Jan 28 01:17:10.425: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 28 01:17:10.425: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Jan 28 01:17:10.425: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
Jan 28 01:17:10.425: INFO: Pod pod-update-66db0e9f-d5a9-4a56-824e-38d66262aa4e requesting resource cpu=0m on Node jerma-node
STEP: Starting Pods to consume most of the cluster CPU.
Jan 28 01:17:10.425: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
Jan 28 01:17:10.432: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-48449903-044c-45ae-b285-5e3a32b643b7.15ede8172863203e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7955/filler-pod-48449903-044c-45ae-b285-5e3a32b643b7 to jerma-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-48449903-044c-45ae-b285-5e3a32b643b7.15ede81826dec6f3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-48449903-044c-45ae-b285-5e3a32b643b7.15ede818d2fdc7a0], Reason = [Created], Message = [Created container filler-pod-48449903-044c-45ae-b285-5e3a32b643b7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-48449903-044c-45ae-b285-5e3a32b643b7.15ede819119b7b92], Reason = [Started], Message = [Started container filler-pod-48449903-044c-45ae-b285-5e3a32b643b7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-86977102-0c12-4c56-8c85-fd128ce50686.15ede8172aa6a2d5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7955/filler-pod-86977102-0c12-4c56-8c85-fd128ce50686 to jerma-server-mvvl6gufaqub]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-86977102-0c12-4c56-8c85-fd128ce50686.15ede8186ef4e3e7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-86977102-0c12-4c56-8c85-fd128ce50686.15ede8192b75db2b], Reason = [Created], Message = [Created container filler-pod-86977102-0c12-4c56-8c85-fd128ce50686]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-86977102-0c12-4c56-8c85-fd128ce50686.15ede819496b810e], Reason = [Started], Message = [Started container filler-pod-86977102-0c12-4c56-8c85-fd128ce50686]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ede81980c81bc4], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label "node" from the node jerma-node
STEP: verifying the node doesn't have the label "node"
STEP: removing the label "node" from the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label "node"
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:17:21.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7955" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:11.524 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":280,"completed":244,"skipped":3970,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:17:21.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1899
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 28 01:17:21.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6515'
Jan 28 01:17:21.911: INFO: stderr: ""
Jan 28 01:17:21.911: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jan 28 01:17:31.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-6515 -o json'
Jan 28 01:17:32.154: INFO: stderr: ""
Jan 28 01:17:32.155: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-28T01:17:21Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-6515\",\n        \"resourceVersion\": \"4791264\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-6515/pods/e2e-test-httpd-pod\",\n        \"uid\": \"dd1ff489-5bb2-4b41-b1c6-ddc06f7489c4\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-w6xzg\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-server-mvvl6gufaqub\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-w6xzg\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-w6xzg\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-28T01:17:21Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-28T01:17:28Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-28T01:17:28Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-28T01:17:21Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://687bab4f58b5d89439bcb44fc35ed5d2627953eedff89b74ff39a198fd38a5a9\",\n                \"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-28T01:17:27Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.1.234\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.32.0.5\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.32.0.5\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-28T01:17:21Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 28 01:17:32.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6515'
Jan 28 01:17:32.676: INFO: stderr: ""
Jan 28 01:17:32.676: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1904
Jan 28 01:17:32.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6515'
Jan 28 01:17:40.288: INFO: stderr: ""
Jan 28 01:17:40.288: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:17:40.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6515" for this suite.

• [SLOW TEST:18.695 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1895
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":280,"completed":245,"skipped":3972,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:17:40.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 01:17:40.539: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c38d0c82-b8e2-4000-9a56-c8d47444ed87", Controller:(*bool)(0xc003a00392), BlockOwnerDeletion:(*bool)(0xc003a00393)}}
Jan 28 01:17:40.555: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"73b9d95e-2a34-42c1-ac75-01c7edf03094", Controller:(*bool)(0xc004531eb2), BlockOwnerDeletion:(*bool)(0xc004531eb3)}}
Jan 28 01:17:40.570: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a4c3a259-df23-48b5-903b-f0a0a6e61753", Controller:(*bool)(0xc00060459a), BlockOwnerDeletion:(*bool)(0xc00060459b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:17:45.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8582" for this suite.

• [SLOW TEST:5.388 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":280,"completed":246,"skipped":3981,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:17:45.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 01:17:47.258: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:0, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771067, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771067, loc:(*time.Location)(0x7e52ca0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}}, CollisionCount:(*int32)(nil)}
Jan 28 01:17:49.271: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771067, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771067, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771067, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771067, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:17:51.287: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771067, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771067, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771067, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771067, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 01:17:54.310: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:17:54.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3418" for this suite.
STEP: Destroying namespace "webhook-3418-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:8.985 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":280,"completed":247,"skipped":3985,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:17:54.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 01:17:55.445: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 01:17:57.458: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771075, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771075, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771075, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771075, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:17:59.468: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771075, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771075, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771075, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771075, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:18:01.471: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771075, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771075, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771075, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771075, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:18:03.466: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771075, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771075, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771075, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771075, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 01:18:06.632: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:18:18.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9150" for this suite.
STEP: Destroying namespace "webhook-9150-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:24.484 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":280,"completed":248,"skipped":3997,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:18:19.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 28 01:18:19.223: INFO: Waiting up to 5m0s for pod "pod-0cfdb50f-2cc8-4d6f-ac8c-6e2cee38a19f" in namespace "emptydir-3572" to be "success or failure"
Jan 28 01:18:19.271: INFO: Pod "pod-0cfdb50f-2cc8-4d6f-ac8c-6e2cee38a19f": Phase="Pending", Reason="", readiness=false. Elapsed: 48.029889ms
Jan 28 01:18:21.299: INFO: Pod "pod-0cfdb50f-2cc8-4d6f-ac8c-6e2cee38a19f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076035863s
Jan 28 01:18:23.335: INFO: Pod "pod-0cfdb50f-2cc8-4d6f-ac8c-6e2cee38a19f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111668857s
Jan 28 01:18:25.348: INFO: Pod "pod-0cfdb50f-2cc8-4d6f-ac8c-6e2cee38a19f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.12526872s
Jan 28 01:18:27.359: INFO: Pod "pod-0cfdb50f-2cc8-4d6f-ac8c-6e2cee38a19f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136161796s
Jan 28 01:18:29.371: INFO: Pod "pod-0cfdb50f-2cc8-4d6f-ac8c-6e2cee38a19f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.148362399s
STEP: Saw pod success
Jan 28 01:18:29.371: INFO: Pod "pod-0cfdb50f-2cc8-4d6f-ac8c-6e2cee38a19f" satisfied condition "success or failure"
Jan 28 01:18:29.375: INFO: Trying to get logs from node jerma-node pod pod-0cfdb50f-2cc8-4d6f-ac8c-6e2cee38a19f container test-container: 
STEP: delete the pod
Jan 28 01:18:29.771: INFO: Waiting for pod pod-0cfdb50f-2cc8-4d6f-ac8c-6e2cee38a19f to disappear
Jan 28 01:18:29.789: INFO: Pod pod-0cfdb50f-2cc8-4d6f-ac8c-6e2cee38a19f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:18:29.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3572" for this suite.

• [SLOW TEST:10.636 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":249,"skipped":4001,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:18:29.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: set up a multi version CRD
Jan 28 01:18:30.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:18:47.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5160" for this suite.

• [SLOW TEST:18.163 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":280,"completed":250,"skipped":4037,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:18:47.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7022.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7022.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7022.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7022.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 01:19:00.158: INFO: DNS probes using dns-test-57eff9a7-de2e-4ac6-85ce-c6d49e1706e1 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7022.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7022.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7022.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7022.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 01:19:14.626: INFO: File wheezy_udp@dns-test-service-3.dns-7022.svc.cluster.local from pod  dns-7022/dns-test-3de63f8e-5c52-40d3-b39c-6b421a6c5249 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 28 01:19:14.672: INFO: File jessie_udp@dns-test-service-3.dns-7022.svc.cluster.local from pod  dns-7022/dns-test-3de63f8e-5c52-40d3-b39c-6b421a6c5249 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 28 01:19:14.673: INFO: Lookups using dns-7022/dns-test-3de63f8e-5c52-40d3-b39c-6b421a6c5249 failed for: [wheezy_udp@dns-test-service-3.dns-7022.svc.cluster.local jessie_udp@dns-test-service-3.dns-7022.svc.cluster.local]

Jan 28 01:19:19.681: INFO: File wheezy_udp@dns-test-service-3.dns-7022.svc.cluster.local from pod  dns-7022/dns-test-3de63f8e-5c52-40d3-b39c-6b421a6c5249 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 28 01:19:19.686: INFO: File jessie_udp@dns-test-service-3.dns-7022.svc.cluster.local from pod  dns-7022/dns-test-3de63f8e-5c52-40d3-b39c-6b421a6c5249 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 28 01:19:19.686: INFO: Lookups using dns-7022/dns-test-3de63f8e-5c52-40d3-b39c-6b421a6c5249 failed for: [wheezy_udp@dns-test-service-3.dns-7022.svc.cluster.local jessie_udp@dns-test-service-3.dns-7022.svc.cluster.local]

Jan 28 01:19:24.681: INFO: File wheezy_udp@dns-test-service-3.dns-7022.svc.cluster.local from pod  dns-7022/dns-test-3de63f8e-5c52-40d3-b39c-6b421a6c5249 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 28 01:19:24.686: INFO: File jessie_udp@dns-test-service-3.dns-7022.svc.cluster.local from pod  dns-7022/dns-test-3de63f8e-5c52-40d3-b39c-6b421a6c5249 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 28 01:19:24.686: INFO: Lookups using dns-7022/dns-test-3de63f8e-5c52-40d3-b39c-6b421a6c5249 failed for: [wheezy_udp@dns-test-service-3.dns-7022.svc.cluster.local jessie_udp@dns-test-service-3.dns-7022.svc.cluster.local]

Jan 28 01:19:29.683: INFO: DNS probes using dns-test-3de63f8e-5c52-40d3-b39c-6b421a6c5249 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7022.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7022.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7022.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7022.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 01:19:44.012: INFO: DNS probes using dns-test-03b88f28-5f43-41c9-898b-6d3fda5933a0 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:19:44.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7022" for this suite.

• [SLOW TEST:56.192 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":280,"completed":251,"skipped":4042,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:19:44.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2809.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2809.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 28 01:19:58.418: INFO: DNS probes using dns-2809/dns-test-43e76b06-09de-4878-bb80-35f0b439d277 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:19:58.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2809" for this suite.

• [SLOW TEST:14.357 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":280,"completed":252,"skipped":4054,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:19:58.524: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-8505
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 28 01:19:58.683: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 28 01:19:58.767: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:20:00.788: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:20:02.771: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:20:04.893: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:20:06.905: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:20:08.775: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 28 01:20:10.787: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 01:20:12.772: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 01:20:14.778: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 01:20:16.774: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 01:20:18.772: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 01:20:20.785: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 01:20:22.773: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 28 01:20:24.774: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 28 01:20:24.783: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jan 28 01:20:32.914: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8505 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 01:20:32.914: INFO: >>> kubeConfig: /root/.kube/config
I0128 01:20:32.964929       9 log.go:172] (0xc00480c6e0) (0xc001b401e0) Create stream
I0128 01:20:32.964990       9 log.go:172] (0xc00480c6e0) (0xc001b401e0) Stream added, broadcasting: 1
I0128 01:20:32.977869       9 log.go:172] (0xc00480c6e0) Reply frame received for 1
I0128 01:20:32.978012       9 log.go:172] (0xc00480c6e0) (0xc001c4a0a0) Create stream
I0128 01:20:32.978040       9 log.go:172] (0xc00480c6e0) (0xc001c4a0a0) Stream added, broadcasting: 3
I0128 01:20:32.980565       9 log.go:172] (0xc00480c6e0) Reply frame received for 3
I0128 01:20:32.980654       9 log.go:172] (0xc00480c6e0) (0xc001b40280) Create stream
I0128 01:20:32.980667       9 log.go:172] (0xc00480c6e0) (0xc001b40280) Stream added, broadcasting: 5
I0128 01:20:32.983064       9 log.go:172] (0xc00480c6e0) Reply frame received for 5
I0128 01:20:34.077402       9 log.go:172] (0xc00480c6e0) Data frame received for 3
I0128 01:20:34.077515       9 log.go:172] (0xc001c4a0a0) (3) Data frame handling
I0128 01:20:34.077562       9 log.go:172] (0xc001c4a0a0) (3) Data frame sent
I0128 01:20:34.179596       9 log.go:172] (0xc00480c6e0) (0xc001b40280) Stream removed, broadcasting: 5
I0128 01:20:34.179719       9 log.go:172] (0xc00480c6e0) Data frame received for 1
I0128 01:20:34.179747       9 log.go:172] (0xc00480c6e0) (0xc001c4a0a0) Stream removed, broadcasting: 3
I0128 01:20:34.179791       9 log.go:172] (0xc001b401e0) (1) Data frame handling
I0128 01:20:34.179813       9 log.go:172] (0xc001b401e0) (1) Data frame sent
I0128 01:20:34.179826       9 log.go:172] (0xc00480c6e0) (0xc001b401e0) Stream removed, broadcasting: 1
I0128 01:20:34.179859       9 log.go:172] (0xc00480c6e0) Go away received
I0128 01:20:34.180147       9 log.go:172] (0xc00480c6e0) (0xc001b401e0) Stream removed, broadcasting: 1
I0128 01:20:34.180161       9 log.go:172] (0xc00480c6e0) (0xc001c4a0a0) Stream removed, broadcasting: 3
I0128 01:20:34.180173       9 log.go:172] (0xc00480c6e0) (0xc001b40280) Stream removed, broadcasting: 5
Jan 28 01:20:34.180: INFO: Found all expected endpoints: [netserver-0]
Jan 28 01:20:34.184: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8505 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 28 01:20:34.184: INFO: >>> kubeConfig: /root/.kube/config
I0128 01:20:34.222313       9 log.go:172] (0xc004a40370) (0xc002a25c20) Create stream
I0128 01:20:34.222411       9 log.go:172] (0xc004a40370) (0xc002a25c20) Stream added, broadcasting: 1
I0128 01:20:34.226279       9 log.go:172] (0xc004a40370) Reply frame received for 1
I0128 01:20:34.226325       9 log.go:172] (0xc004a40370) (0xc001c4a1e0) Create stream
I0128 01:20:34.226340       9 log.go:172] (0xc004a40370) (0xc001c4a1e0) Stream added, broadcasting: 3
I0128 01:20:34.227995       9 log.go:172] (0xc004a40370) Reply frame received for 3
I0128 01:20:34.228016       9 log.go:172] (0xc004a40370) (0xc001b40460) Create stream
I0128 01:20:34.228023       9 log.go:172] (0xc004a40370) (0xc001b40460) Stream added, broadcasting: 5
I0128 01:20:34.229457       9 log.go:172] (0xc004a40370) Reply frame received for 5
I0128 01:20:35.304274       9 log.go:172] (0xc004a40370) Data frame received for 3
I0128 01:20:35.304323       9 log.go:172] (0xc001c4a1e0) (3) Data frame handling
I0128 01:20:35.304417       9 log.go:172] (0xc001c4a1e0) (3) Data frame sent
I0128 01:20:35.399068       9 log.go:172] (0xc004a40370) Data frame received for 1
I0128 01:20:35.399133       9 log.go:172] (0xc002a25c20) (1) Data frame handling
I0128 01:20:35.399167       9 log.go:172] (0xc002a25c20) (1) Data frame sent
I0128 01:20:35.399454       9 log.go:172] (0xc004a40370) (0xc002a25c20) Stream removed, broadcasting: 1
I0128 01:20:35.400116       9 log.go:172] (0xc004a40370) (0xc001c4a1e0) Stream removed, broadcasting: 3
I0128 01:20:35.401017       9 log.go:172] (0xc004a40370) (0xc001b40460) Stream removed, broadcasting: 5
I0128 01:20:35.401086       9 log.go:172] (0xc004a40370) Go away received
I0128 01:20:35.401355       9 log.go:172] (0xc004a40370) (0xc002a25c20) Stream removed, broadcasting: 1
I0128 01:20:35.401515       9 log.go:172] (0xc004a40370) (0xc001c4a1e0) Stream removed, broadcasting: 3
I0128 01:20:35.401574       9 log.go:172] (0xc004a40370) (0xc001b40460) Stream removed, broadcasting: 5
Jan 28 01:20:35.401: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:20:35.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8505" for this suite.

• [SLOW TEST:36.901 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":253,"skipped":4060,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:20:35.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 01:20:35.956: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 01:20:37.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771236, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:20:39.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771236, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:20:42.217: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771236, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:20:44.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771236, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:20:45.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771236, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:20:47.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771236, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771235, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 01:20:51.008: INFO: Waiting for number of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:20:52.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4474" for this suite.
STEP: Destroying namespace "webhook-4474-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:17.084 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":280,"completed":254,"skipped":4088,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:20:52.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-22640954-1a09-4ae2-8328-b74dcef2ae86
STEP: Creating a pod to test consume secrets
Jan 28 01:20:52.654: INFO: Waiting up to 5m0s for pod "pod-secrets-ee9ef473-ca4a-49db-b3c4-ae39ed86dbe0" in namespace "secrets-15" to be "success or failure"
Jan 28 01:20:52.691: INFO: Pod "pod-secrets-ee9ef473-ca4a-49db-b3c4-ae39ed86dbe0": Phase="Pending", Reason="", readiness=false. Elapsed: 36.293993ms
Jan 28 01:20:54.695: INFO: Pod "pod-secrets-ee9ef473-ca4a-49db-b3c4-ae39ed86dbe0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041116661s
Jan 28 01:20:56.716: INFO: Pod "pod-secrets-ee9ef473-ca4a-49db-b3c4-ae39ed86dbe0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06184429s
Jan 28 01:20:58.726: INFO: Pod "pod-secrets-ee9ef473-ca4a-49db-b3c4-ae39ed86dbe0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07173083s
Jan 28 01:21:00.739: INFO: Pod "pod-secrets-ee9ef473-ca4a-49db-b3c4-ae39ed86dbe0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084442864s
Jan 28 01:21:02.745: INFO: Pod "pod-secrets-ee9ef473-ca4a-49db-b3c4-ae39ed86dbe0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090718054s
STEP: Saw pod success
Jan 28 01:21:02.745: INFO: Pod "pod-secrets-ee9ef473-ca4a-49db-b3c4-ae39ed86dbe0" satisfied condition "success or failure"
Jan 28 01:21:02.750: INFO: Trying to get logs from node jerma-node pod pod-secrets-ee9ef473-ca4a-49db-b3c4-ae39ed86dbe0 container secret-volume-test: 
STEP: delete the pod
Jan 28 01:21:02.979: INFO: Waiting for pod pod-secrets-ee9ef473-ca4a-49db-b3c4-ae39ed86dbe0 to disappear
Jan 28 01:21:02.987: INFO: Pod pod-secrets-ee9ef473-ca4a-49db-b3c4-ae39ed86dbe0 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:21:02.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-15" for this suite.

• [SLOW TEST:10.491 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":255,"skipped":4121,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:21:03.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 28 01:21:03.248: INFO: Waiting up to 5m0s for pod "downwardapi-volume-acd498de-06aa-497b-8ae7-f56079523215" in namespace "projected-9177" to be "success or failure"
Jan 28 01:21:03.259: INFO: Pod "downwardapi-volume-acd498de-06aa-497b-8ae7-f56079523215": Phase="Pending", Reason="", readiness=false. Elapsed: 11.722259ms
Jan 28 01:21:05.289: INFO: Pod "downwardapi-volume-acd498de-06aa-497b-8ae7-f56079523215": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041788434s
Jan 28 01:21:07.295: INFO: Pod "downwardapi-volume-acd498de-06aa-497b-8ae7-f56079523215": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04748897s
Jan 28 01:21:09.305: INFO: Pod "downwardapi-volume-acd498de-06aa-497b-8ae7-f56079523215": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05786877s
Jan 28 01:21:11.314: INFO: Pod "downwardapi-volume-acd498de-06aa-497b-8ae7-f56079523215": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066319023s
STEP: Saw pod success
Jan 28 01:21:11.314: INFO: Pod "downwardapi-volume-acd498de-06aa-497b-8ae7-f56079523215" satisfied condition "success or failure"
Jan 28 01:21:11.319: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-acd498de-06aa-497b-8ae7-f56079523215 container client-container: 
STEP: delete the pod
Jan 28 01:21:11.373: INFO: Waiting for pod downwardapi-volume-acd498de-06aa-497b-8ae7-f56079523215 to disappear
Jan 28 01:21:11.396: INFO: Pod downwardapi-volume-acd498de-06aa-497b-8ae7-f56079523215 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:21:11.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9177" for this suite.

• [SLOW TEST:8.485 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":256,"skipped":4132,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:21:11.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-38076490-467f-4bf3-8af2-0cfeb5c84b81 in namespace container-probe-469
Jan 28 01:21:21.713: INFO: Started pod liveness-38076490-467f-4bf3-8af2-0cfeb5c84b81 in namespace container-probe-469
STEP: checking the pod's current state and verifying that restartCount is present
Jan 28 01:21:21.717: INFO: Initial restart count of pod liveness-38076490-467f-4bf3-8af2-0cfeb5c84b81 is 0
Jan 28 01:21:45.863: INFO: Restart count of pod container-probe-469/liveness-38076490-467f-4bf3-8af2-0cfeb5c84b81 is now 1 (24.145971846s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:21:45.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-469" for this suite.

• [SLOW TEST:34.425 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":257,"skipped":4147,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:21:45.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 01:21:46.035: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 28 01:21:51.040: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 28 01:21:57.050: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 28 01:21:59.058: INFO: Creating deployment "test-rollover-deployment"
Jan 28 01:21:59.079: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 28 01:22:01.098: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 28 01:22:01.110: INFO: Ensure that both replica sets have 1 created replica
Jan 28 01:22:01.118: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 28 01:22:01.129: INFO: Updating deployment test-rollover-deployment
Jan 28 01:22:01.129: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 28 01:22:03.151: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 28 01:22:03.163: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 28 01:22:03.168: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 01:22:03.168: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771321, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:22:05.182: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 01:22:05.183: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771321, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:22:07.184: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 01:22:07.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771321, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:22:09.182: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 01:22:09.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771321, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:22:11.179: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 01:22:11.179: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771329, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:22:13.180: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 01:22:13.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771329, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:22:15.181: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 01:22:15.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771329, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:22:17.183: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 01:22:17.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771329, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:22:19.180: INFO: all replica sets need to contain the pod-template-hash label
Jan 28 01:22:19.180: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771329, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771319, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:22:21.181: INFO: 
Jan 28 01:22:21.181: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jan 28 01:22:21.195: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-2097 /apis/apps/v1/namespaces/deployment-2097/deployments/test-rollover-deployment 9e0c1bbe-7ca6-4e1d-b235-4deb4d57472f 4792618 2 2020-01-28 01:21:59 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0043e8c58  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-28 01:21:59 +0000 UTC,LastTransitionTime:2020-01-28 01:21:59 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-01-28 01:22:20 +0000 UTC,LastTransitionTime:2020-01-28 01:21:59 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 28 01:22:21.200: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-2097 /apis/apps/v1/namespaces/deployment-2097/replicasets/test-rollover-deployment-574d6dfbff 4dfb670b-22e3-484f-8b37-cb52f84483ff 4792607 2 2020-01-28 01:22:01 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 9e0c1bbe-7ca6-4e1d-b235-4deb4d57472f 0xc005c31917 0xc005c31918}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005c31988  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 28 01:22:21.200: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 28 01:22:21.200: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-2097 /apis/apps/v1/namespaces/deployment-2097/replicasets/test-rollover-controller 7bc46fee-ac24-4004-9ec4-7d8f9cd2339c 4792616 2 2020-01-28 01:21:46 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 9e0c1bbe-7ca6-4e1d-b235-4deb4d57472f 0xc005c31847 0xc005c31848}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005c318a8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 28 01:22:21.200: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-2097 /apis/apps/v1/namespaces/deployment-2097/replicasets/test-rollover-deployment-f6c94f66c 307bb2bd-f8b7-4d65-bca1-fb09a43ba4dc 4792555 2 2020-01-28 01:21:59 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 9e0c1bbe-7ca6-4e1d-b235-4deb4d57472f 0xc005c319f0 0xc005c319f1}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005c31a68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 28 01:22:21.204: INFO: Pod "test-rollover-deployment-574d6dfbff-g47kg" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-g47kg test-rollover-deployment-574d6dfbff- deployment-2097 /api/v1/namespaces/deployment-2097/pods/test-rollover-deployment-574d6dfbff-g47kg 13563f46-b843-4af5-8ca6-46e20bd5e896 4792581 0 2020-01-28 01:22:01 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 4dfb670b-22e3-484f-8b37-cb52f84483ff 0xc005c31fc7 0xc005c31fc8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mdhlf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mdhlf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mdhlf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 01:22:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 01:22:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 01:22:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-28 01:22:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-28 01:22:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-28 01:22:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://53ba599d345793dd17fee2eb13deac316c449d6debd8693aa0cbe7f7de84c1bc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:22:21.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2097" for this suite.

• [SLOW TEST:35.297 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":280,"completed":258,"skipped":4158,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:22:21.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-fcd1ccbb-b430-44ac-8dac-a5f59b18b2ae
STEP: Creating a pod to test consume secrets
Jan 28 01:22:21.630: INFO: Waiting up to 5m0s for pod "pod-secrets-d4c028fa-b19b-4a54-a6fe-7bc925e6ba1f" in namespace "secrets-4360" to be "success or failure"
Jan 28 01:22:21.658: INFO: Pod "pod-secrets-d4c028fa-b19b-4a54-a6fe-7bc925e6ba1f": Phase="Pending", Reason="", readiness=false. Elapsed: 27.594558ms
Jan 28 01:22:23.667: INFO: Pod "pod-secrets-d4c028fa-b19b-4a54-a6fe-7bc925e6ba1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036509785s
Jan 28 01:22:25.673: INFO: Pod "pod-secrets-d4c028fa-b19b-4a54-a6fe-7bc925e6ba1f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042873311s
Jan 28 01:22:27.679: INFO: Pod "pod-secrets-d4c028fa-b19b-4a54-a6fe-7bc925e6ba1f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048287085s
Jan 28 01:22:29.725: INFO: Pod "pod-secrets-d4c028fa-b19b-4a54-a6fe-7bc925e6ba1f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.094752925s
Jan 28 01:22:31.733: INFO: Pod "pod-secrets-d4c028fa-b19b-4a54-a6fe-7bc925e6ba1f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.102251032s
Jan 28 01:22:33.739: INFO: Pod "pod-secrets-d4c028fa-b19b-4a54-a6fe-7bc925e6ba1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.109144069s
STEP: Saw pod success
Jan 28 01:22:33.739: INFO: Pod "pod-secrets-d4c028fa-b19b-4a54-a6fe-7bc925e6ba1f" satisfied condition "success or failure"
Jan 28 01:22:33.742: INFO: Trying to get logs from node jerma-node pod pod-secrets-d4c028fa-b19b-4a54-a6fe-7bc925e6ba1f container secret-volume-test: 
STEP: delete the pod
Jan 28 01:22:33.869: INFO: Waiting for pod pod-secrets-d4c028fa-b19b-4a54-a6fe-7bc925e6ba1f to disappear
Jan 28 01:22:33.884: INFO: Pod pod-secrets-d4c028fa-b19b-4a54-a6fe-7bc925e6ba1f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:22:33.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4360" for this suite.

• [SLOW TEST:12.740 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":259,"skipped":4188,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:22:33.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-8d065941-8799-473a-8db5-d8c8fd4a9b50
STEP: Creating a pod to test consume secrets
Jan 28 01:22:34.100: INFO: Waiting up to 5m0s for pod "pod-secrets-948cca9d-aeb1-4e7b-8c4e-3aa0b03d4b0d" in namespace "secrets-8366" to be "success or failure"
Jan 28 01:22:34.143: INFO: Pod "pod-secrets-948cca9d-aeb1-4e7b-8c4e-3aa0b03d4b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 42.741544ms
Jan 28 01:22:36.149: INFO: Pod "pod-secrets-948cca9d-aeb1-4e7b-8c4e-3aa0b03d4b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048911498s
Jan 28 01:22:38.156: INFO: Pod "pod-secrets-948cca9d-aeb1-4e7b-8c4e-3aa0b03d4b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055873042s
Jan 28 01:22:40.165: INFO: Pod "pod-secrets-948cca9d-aeb1-4e7b-8c4e-3aa0b03d4b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065055576s
Jan 28 01:22:42.174: INFO: Pod "pod-secrets-948cca9d-aeb1-4e7b-8c4e-3aa0b03d4b0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.07377989s
STEP: Saw pod success
Jan 28 01:22:42.174: INFO: Pod "pod-secrets-948cca9d-aeb1-4e7b-8c4e-3aa0b03d4b0d" satisfied condition "success or failure"
Jan 28 01:22:42.178: INFO: Trying to get logs from node jerma-node pod pod-secrets-948cca9d-aeb1-4e7b-8c4e-3aa0b03d4b0d container secret-volume-test: 
STEP: delete the pod
Jan 28 01:22:42.246: INFO: Waiting for pod pod-secrets-948cca9d-aeb1-4e7b-8c4e-3aa0b03d4b0d to disappear
Jan 28 01:22:42.262: INFO: Pod pod-secrets-948cca9d-aeb1-4e7b-8c4e-3aa0b03d4b0d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:22:42.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8366" for this suite.

• [SLOW TEST:8.320 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":260,"skipped":4228,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:22:42.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:22:59.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6918" for this suite.

• [SLOW TEST:17.322 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":280,"completed":261,"skipped":4240,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:22:59.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1598
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 28 01:22:59.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2391'
Jan 28 01:23:02.428: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 28 01:23:02.428: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1604
Jan 28 01:23:04.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2391'
Jan 28 01:23:04.750: INFO: stderr: ""
Jan 28 01:23:04.751: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:23:04.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2391" for this suite.

• [SLOW TEST:5.171 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1592
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":280,"completed":262,"skipped":4254,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:23:04.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-0286aa7e-875d-4cd5-8b18-9190d4948cf8
STEP: Creating a pod to test consume configMaps
Jan 28 01:23:04.908: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e02bed86-b9be-443f-b084-3548e840a21d" in namespace "projected-1326" to be "success or failure"
Jan 28 01:23:04.967: INFO: Pod "pod-projected-configmaps-e02bed86-b9be-443f-b084-3548e840a21d": Phase="Pending", Reason="", readiness=false. Elapsed: 59.344659ms
Jan 28 01:23:07.102: INFO: Pod "pod-projected-configmaps-e02bed86-b9be-443f-b084-3548e840a21d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194066639s
Jan 28 01:23:09.109: INFO: Pod "pod-projected-configmaps-e02bed86-b9be-443f-b084-3548e840a21d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.200790843s
Jan 28 01:23:11.121: INFO: Pod "pod-projected-configmaps-e02bed86-b9be-443f-b084-3548e840a21d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.212820446s
Jan 28 01:23:13.129: INFO: Pod "pod-projected-configmaps-e02bed86-b9be-443f-b084-3548e840a21d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.220979072s
Jan 28 01:23:15.137: INFO: Pod "pod-projected-configmaps-e02bed86-b9be-443f-b084-3548e840a21d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.228998291s
STEP: Saw pod success
Jan 28 01:23:15.137: INFO: Pod "pod-projected-configmaps-e02bed86-b9be-443f-b084-3548e840a21d" satisfied condition "success or failure"
Jan 28 01:23:15.144: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-e02bed86-b9be-443f-b084-3548e840a21d container projected-configmap-volume-test: 
STEP: delete the pod
Jan 28 01:23:15.272: INFO: Waiting for pod pod-projected-configmaps-e02bed86-b9be-443f-b084-3548e840a21d to disappear
Jan 28 01:23:15.297: INFO: Pod pod-projected-configmaps-e02bed86-b9be-443f-b084-3548e840a21d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:23:15.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1326" for this suite.

• [SLOW TEST:10.552 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":263,"skipped":4275,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:23:15.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override all
Jan 28 01:23:15.495: INFO: Waiting up to 5m0s for pod "client-containers-bf691f30-380e-471a-aa8d-b21ef7792a4b" in namespace "containers-8483" to be "success or failure"
Jan 28 01:23:15.630: INFO: Pod "client-containers-bf691f30-380e-471a-aa8d-b21ef7792a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 134.724417ms
Jan 28 01:23:17.635: INFO: Pod "client-containers-bf691f30-380e-471a-aa8d-b21ef7792a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140279s
Jan 28 01:23:19.680: INFO: Pod "client-containers-bf691f30-380e-471a-aa8d-b21ef7792a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184701901s
Jan 28 01:23:21.688: INFO: Pod "client-containers-bf691f30-380e-471a-aa8d-b21ef7792a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.19334367s
Jan 28 01:23:23.703: INFO: Pod "client-containers-bf691f30-380e-471a-aa8d-b21ef7792a4b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.207847012s
Jan 28 01:23:25.709: INFO: Pod "client-containers-bf691f30-380e-471a-aa8d-b21ef7792a4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.214166134s
STEP: Saw pod success
Jan 28 01:23:25.709: INFO: Pod "client-containers-bf691f30-380e-471a-aa8d-b21ef7792a4b" satisfied condition "success or failure"
Jan 28 01:23:25.714: INFO: Trying to get logs from node jerma-node pod client-containers-bf691f30-380e-471a-aa8d-b21ef7792a4b container test-container: 
STEP: delete the pod
Jan 28 01:23:25.805: INFO: Waiting for pod client-containers-bf691f30-380e-471a-aa8d-b21ef7792a4b to disappear
Jan 28 01:23:25.817: INFO: Pod client-containers-bf691f30-380e-471a-aa8d-b21ef7792a4b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:23:25.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8483" for this suite.

• [SLOW TEST:10.514 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":280,"completed":264,"skipped":4319,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:23:25.843: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-secret-tckl
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 01:23:25.993: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-tckl" in namespace "subpath-3640" to be "success or failure"
Jan 28 01:23:26.016: INFO: Pod "pod-subpath-test-secret-tckl": Phase="Pending", Reason="", readiness=false. Elapsed: 23.384452ms
Jan 28 01:23:28.025: INFO: Pod "pod-subpath-test-secret-tckl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031613281s
Jan 28 01:23:30.607: INFO: Pod "pod-subpath-test-secret-tckl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.614364622s
Jan 28 01:23:32.621: INFO: Pod "pod-subpath-test-secret-tckl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.62846741s
Jan 28 01:23:34.628: INFO: Pod "pod-subpath-test-secret-tckl": Phase="Running", Reason="", readiness=true. Elapsed: 8.634868491s
Jan 28 01:23:36.637: INFO: Pod "pod-subpath-test-secret-tckl": Phase="Running", Reason="", readiness=true. Elapsed: 10.643612886s
Jan 28 01:23:38.646: INFO: Pod "pod-subpath-test-secret-tckl": Phase="Running", Reason="", readiness=true. Elapsed: 12.652662978s
Jan 28 01:23:40.651: INFO: Pod "pod-subpath-test-secret-tckl": Phase="Running", Reason="", readiness=true. Elapsed: 14.658013398s
Jan 28 01:23:42.655: INFO: Pod "pod-subpath-test-secret-tckl": Phase="Running", Reason="", readiness=true. Elapsed: 16.662256555s
Jan 28 01:23:44.667: INFO: Pod "pod-subpath-test-secret-tckl": Phase="Running", Reason="", readiness=true. Elapsed: 18.674048168s
Jan 28 01:23:46.675: INFO: Pod "pod-subpath-test-secret-tckl": Phase="Running", Reason="", readiness=true. Elapsed: 20.682501333s
Jan 28 01:23:48.684: INFO: Pod "pod-subpath-test-secret-tckl": Phase="Running", Reason="", readiness=true. Elapsed: 22.690665136s
Jan 28 01:23:50.691: INFO: Pod "pod-subpath-test-secret-tckl": Phase="Running", Reason="", readiness=true. Elapsed: 24.69824902s
Jan 28 01:23:52.712: INFO: Pod "pod-subpath-test-secret-tckl": Phase="Running", Reason="", readiness=true. Elapsed: 26.718683424s
Jan 28 01:23:54.722: INFO: Pod "pod-subpath-test-secret-tckl": Phase="Running", Reason="", readiness=true. Elapsed: 28.728991979s
Jan 28 01:23:56.729: INFO: Pod "pod-subpath-test-secret-tckl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.735744075s
STEP: Saw pod success
Jan 28 01:23:56.729: INFO: Pod "pod-subpath-test-secret-tckl" satisfied condition "success or failure"
Jan 28 01:23:56.733: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-tckl container test-container-subpath-secret-tckl: 
STEP: delete the pod
Jan 28 01:23:56.822: INFO: Waiting for pod pod-subpath-test-secret-tckl to disappear
Jan 28 01:23:56.831: INFO: Pod pod-subpath-test-secret-tckl no longer exists
STEP: Deleting pod pod-subpath-test-secret-tckl
Jan 28 01:23:56.831: INFO: Deleting pod "pod-subpath-test-secret-tckl" in namespace "subpath-3640"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:23:56.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3640" for this suite.

• [SLOW TEST:31.050 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":280,"completed":265,"skipped":4325,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:23:56.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 28 01:23:57.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 28 01:23:57.212: INFO: stderr: ""
Jan 28 01:23:57.212: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18+\", GitVersion:\"v1.18.0-alpha.2.152+426b3538900329\", GitCommit:\"426b3538900329ed2ce5a0cb1cccf2f0ff32db60\", GitTreeState:\"clean\", BuildDate:\"2020-01-25T12:55:25Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:23:57.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1000" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":280,"completed":266,"skipped":4338,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:23:57.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Jan 28 01:23:57.334: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 28 01:23:57.348: INFO: Waiting for terminating namespaces to be deleted...
Jan 28 01:23:57.351: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 28 01:23:57.364: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 28 01:23:57.364: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 01:23:57.364: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 28 01:23:57.364: INFO: 	Container weave ready: true, restart count 1
Jan 28 01:23:57.364: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 01:23:57.364: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 28 01:23:57.396: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 28 01:23:57.396: INFO: 	Container coredns ready: true, restart count 0
Jan 28 01:23:57.396: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 28 01:23:57.396: INFO: 	Container coredns ready: true, restart count 0
Jan 28 01:23:57.396: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 28 01:23:57.396: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 28 01:23:57.396: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 28 01:23:57.396: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 28 01:23:57.396: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 28 01:23:57.396: INFO: 	Container weave ready: true, restart count 0
Jan 28 01:23:57.396: INFO: 	Container weave-npc ready: true, restart count 0
Jan 28 01:23:57.396: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 28 01:23:57.396: INFO: 	Container kube-scheduler ready: true, restart count 4
Jan 28 01:23:57.396: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 28 01:23:57.396: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 28 01:23:57.396: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 28 01:23:57.396: INFO: 	Container etcd ready: true, restart count 1
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ede875f1f4579a], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ede875f562507b], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:23:58.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2910" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":280,"completed":267,"skipped":4339,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:23:58.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 28 01:23:59.615: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 28 01:24:01.631: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771439, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771439, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771439, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771439, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:24:03.741: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771439, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771439, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771439, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771439, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 28 01:24:05.640: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771439, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771439, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771439, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715771439, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 28 01:24:08.713: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:24:08.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8145" for this suite.
STEP: Destroying namespace "webhook-8145-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.519 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":280,"completed":268,"skipped":4361,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:24:09.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 28 01:24:09.322: INFO: Number of nodes with available pods: 0
Jan 28 01:24:09.322: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:24:11.301: INFO: Number of nodes with available pods: 0
Jan 28 01:24:11.301: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:24:11.378: INFO: Number of nodes with available pods: 0
Jan 28 01:24:11.378: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:24:12.331: INFO: Number of nodes with available pods: 0
Jan 28 01:24:12.331: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:24:13.335: INFO: Number of nodes with available pods: 0
Jan 28 01:24:13.335: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:24:14.764: INFO: Number of nodes with available pods: 0
Jan 28 01:24:14.764: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:24:15.914: INFO: Number of nodes with available pods: 0
Jan 28 01:24:15.914: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:24:16.363: INFO: Number of nodes with available pods: 0
Jan 28 01:24:16.363: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:24:17.427: INFO: Number of nodes with available pods: 0
Jan 28 01:24:17.427: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:24:18.398: INFO: Number of nodes with available pods: 0
Jan 28 01:24:18.398: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:24:19.334: INFO: Number of nodes with available pods: 0
Jan 28 01:24:19.334: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:24:20.334: INFO: Number of nodes with available pods: 1
Jan 28 01:24:20.334: INFO: Node jerma-node is running more than one daemon pod
Jan 28 01:24:21.343: INFO: Number of nodes with available pods: 2
Jan 28 01:24:21.343: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 28 01:24:21.418: INFO: Number of nodes with available pods: 2
Jan 28 01:24:21.418: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5071, will wait for the garbage collector to delete the pods
Jan 28 01:24:23.309: INFO: Deleting DaemonSet.extensions daemon-set took: 641.148261ms
Jan 28 01:24:23.410: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.35568ms
Jan 28 01:24:31.247: INFO: Number of nodes with available pods: 0
Jan 28 01:24:31.247: INFO: Number of running nodes: 0, number of available pods: 0
Jan 28 01:24:31.251: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5071/daemonsets","resourceVersion":"4793272"},"items":null}

Jan 28 01:24:31.260: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5071/pods","resourceVersion":"4793273"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:24:31.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5071" for this suite.

• [SLOW TEST:22.196 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":280,"completed":269,"skipped":4372,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:24:31.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-48609218-438a-4ce7-a798-68a5b867a026
STEP: Creating a pod to test consume secrets
Jan 28 01:24:31.454: INFO: Waiting up to 5m0s for pod "pod-secrets-3bbaadff-fc84-40df-80c7-ad109302033b" in namespace "secrets-7926" to be "success or failure"
Jan 28 01:24:31.465: INFO: Pod "pod-secrets-3bbaadff-fc84-40df-80c7-ad109302033b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.2929ms
Jan 28 01:24:33.471: INFO: Pod "pod-secrets-3bbaadff-fc84-40df-80c7-ad109302033b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017472173s
Jan 28 01:24:35.514: INFO: Pod "pod-secrets-3bbaadff-fc84-40df-80c7-ad109302033b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059628088s
Jan 28 01:24:37.518: INFO: Pod "pod-secrets-3bbaadff-fc84-40df-80c7-ad109302033b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064024263s
Jan 28 01:24:39.522: INFO: Pod "pod-secrets-3bbaadff-fc84-40df-80c7-ad109302033b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068316905s
STEP: Saw pod success
Jan 28 01:24:39.522: INFO: Pod "pod-secrets-3bbaadff-fc84-40df-80c7-ad109302033b" satisfied condition "success or failure"
Jan 28 01:24:39.525: INFO: Trying to get logs from node jerma-node pod pod-secrets-3bbaadff-fc84-40df-80c7-ad109302033b container secret-volume-test: 
STEP: delete the pod
Jan 28 01:24:39.554: INFO: Waiting for pod pod-secrets-3bbaadff-fc84-40df-80c7-ad109302033b to disappear
Jan 28 01:24:39.615: INFO: Pod pod-secrets-3bbaadff-fc84-40df-80c7-ad109302033b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:24:39.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7926" for this suite.

• [SLOW TEST:8.379 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":270,"skipped":4387,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:24:39.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:24:56.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3423" for this suite.

• [SLOW TEST:16.486 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":280,"completed":271,"skipped":4394,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:24:56.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating Agnhost RC
Jan 28 01:24:56.259: INFO: namespace kubectl-3894
Jan 28 01:24:56.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3894'
Jan 28 01:24:56.801: INFO: stderr: ""
Jan 28 01:24:56.801: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 28 01:24:57.813: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 01:24:57.813: INFO: Found 0 / 1
Jan 28 01:24:58.810: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 01:24:58.810: INFO: Found 0 / 1
Jan 28 01:24:59.806: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 01:24:59.806: INFO: Found 0 / 1
Jan 28 01:25:00.809: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 01:25:00.809: INFO: Found 0 / 1
Jan 28 01:25:02.260: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 01:25:02.260: INFO: Found 0 / 1
Jan 28 01:25:02.809: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 01:25:02.809: INFO: Found 0 / 1
Jan 28 01:25:03.811: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 01:25:03.812: INFO: Found 0 / 1
Jan 28 01:25:04.809: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 01:25:04.809: INFO: Found 1 / 1
Jan 28 01:25:04.809: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 28 01:25:04.815: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 28 01:25:04.815: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 28 01:25:04.815: INFO: wait on agnhost-master startup in kubectl-3894 
Jan 28 01:25:04.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-b4qrn agnhost-master --namespace=kubectl-3894'
Jan 28 01:25:05.067: INFO: stderr: ""
Jan 28 01:25:05.067: INFO: stdout: "Paused\n"
STEP: exposing RC
Jan 28 01:25:05.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3894'
Jan 28 01:25:05.297: INFO: stderr: ""
Jan 28 01:25:05.297: INFO: stdout: "service/rm2 exposed\n"
Jan 28 01:25:05.319: INFO: Service rm2 in namespace kubectl-3894 found.
STEP: exposing service
Jan 28 01:25:07.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3894'
Jan 28 01:25:07.609: INFO: stderr: ""
Jan 28 01:25:07.609: INFO: stdout: "service/rm3 exposed\n"
Jan 28 01:25:07.669: INFO: Service rm3 in namespace kubectl-3894 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:25:09.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3894" for this suite.

• [SLOW TEST:13.545 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1297
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":280,"completed":272,"skipped":4400,"failed":0}
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:25:09.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 28 01:25:25.893: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 01:25:25.914: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 01:25:27.914: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 01:25:27.932: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 01:25:29.914: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 01:25:29.923: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 01:25:31.914: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 01:25:31.921: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 01:25:33.914: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 01:25:33.928: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 01:25:35.914: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 01:25:35.923: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 01:25:37.914: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 01:25:37.926: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 01:25:39.914: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 01:25:39.923: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 01:25:41.914: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 01:25:41.924: INFO: Pod pod-with-prestop-http-hook still exists
Jan 28 01:25:43.914: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 28 01:25:43.922: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:25:43.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2011" for this suite.

• [SLOW TEST:34.274 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":280,"completed":273,"skipped":4401,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:25:43.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:26:00.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1790" for this suite.

• [SLOW TEST:16.254 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":280,"completed":274,"skipped":4415,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:26:00.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 28 01:26:00.362: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fdeaa33f-8217-42f0-9463-57b5b046c394" in namespace "projected-7244" to be "success or failure"
Jan 28 01:26:00.480: INFO: Pod "downwardapi-volume-fdeaa33f-8217-42f0-9463-57b5b046c394": Phase="Pending", Reason="", readiness=false. Elapsed: 117.217694ms
Jan 28 01:26:02.488: INFO: Pod "downwardapi-volume-fdeaa33f-8217-42f0-9463-57b5b046c394": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125612785s
Jan 28 01:26:04.496: INFO: Pod "downwardapi-volume-fdeaa33f-8217-42f0-9463-57b5b046c394": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133617306s
Jan 28 01:26:06.505: INFO: Pod "downwardapi-volume-fdeaa33f-8217-42f0-9463-57b5b046c394": Phase="Pending", Reason="", readiness=false. Elapsed: 6.142163616s
Jan 28 01:26:08.517: INFO: Pod "downwardapi-volume-fdeaa33f-8217-42f0-9463-57b5b046c394": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.154716207s
STEP: Saw pod success
Jan 28 01:26:08.517: INFO: Pod "downwardapi-volume-fdeaa33f-8217-42f0-9463-57b5b046c394" satisfied condition "success or failure"
Jan 28 01:26:08.522: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-fdeaa33f-8217-42f0-9463-57b5b046c394 container client-container: 
STEP: delete the pod
Jan 28 01:26:08.631: INFO: Waiting for pod downwardapi-volume-fdeaa33f-8217-42f0-9463-57b5b046c394 to disappear
Jan 28 01:26:08.639: INFO: Pod downwardapi-volume-fdeaa33f-8217-42f0-9463-57b5b046c394 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:26:08.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7244" for this suite.

• [SLOW TEST:8.425 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":275,"skipped":4431,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:26:08.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-6eb07e04-93fe-4484-8ab0-d1f25edc261a
STEP: Creating configMap with name cm-test-opt-upd-de926207-a213-4c0d-8ae1-669a9edacd1d
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-6eb07e04-93fe-4484-8ab0-d1f25edc261a
STEP: Updating configmap cm-test-opt-upd-de926207-a213-4c0d-8ae1-669a9edacd1d
STEP: Creating configMap with name cm-test-opt-create-b3e02f28-1f4e-407b-afb7-882a844df136
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:26:25.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9709" for this suite.

• [SLOW TEST:16.699 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":276,"skipped":4475,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:26:25.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-m42g
STEP: Creating a pod to test atomic-volume-subpath
Jan 28 01:26:25.469: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-m42g" in namespace "subpath-6430" to be "success or failure"
Jan 28 01:26:25.475: INFO: Pod "pod-subpath-test-configmap-m42g": Phase="Pending", Reason="", readiness=false. Elapsed: 5.87735ms
Jan 28 01:26:27.483: INFO: Pod "pod-subpath-test-configmap-m42g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013616686s
Jan 28 01:26:29.510: INFO: Pod "pod-subpath-test-configmap-m42g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04071784s
Jan 28 01:26:31.519: INFO: Pod "pod-subpath-test-configmap-m42g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049530327s
Jan 28 01:26:33.540: INFO: Pod "pod-subpath-test-configmap-m42g": Phase="Running", Reason="", readiness=true. Elapsed: 8.071119279s
Jan 28 01:26:35.547: INFO: Pod "pod-subpath-test-configmap-m42g": Phase="Running", Reason="", readiness=true. Elapsed: 10.077942106s
Jan 28 01:26:37.552: INFO: Pod "pod-subpath-test-configmap-m42g": Phase="Running", Reason="", readiness=true. Elapsed: 12.082882033s
Jan 28 01:26:39.559: INFO: Pod "pod-subpath-test-configmap-m42g": Phase="Running", Reason="", readiness=true. Elapsed: 14.08981084s
Jan 28 01:26:41.567: INFO: Pod "pod-subpath-test-configmap-m42g": Phase="Running", Reason="", readiness=true. Elapsed: 16.097564251s
Jan 28 01:26:43.573: INFO: Pod "pod-subpath-test-configmap-m42g": Phase="Running", Reason="", readiness=true. Elapsed: 18.103837358s
Jan 28 01:26:45.673: INFO: Pod "pod-subpath-test-configmap-m42g": Phase="Running", Reason="", readiness=true. Elapsed: 20.204244664s
Jan 28 01:26:47.680: INFO: Pod "pod-subpath-test-configmap-m42g": Phase="Running", Reason="", readiness=true. Elapsed: 22.210520668s
Jan 28 01:26:49.708: INFO: Pod "pod-subpath-test-configmap-m42g": Phase="Running", Reason="", readiness=true. Elapsed: 24.238871057s
Jan 28 01:26:51.715: INFO: Pod "pod-subpath-test-configmap-m42g": Phase="Running", Reason="", readiness=true. Elapsed: 26.245720262s
Jan 28 01:26:53.723: INFO: Pod "pod-subpath-test-configmap-m42g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.253788247s
STEP: Saw pod success
Jan 28 01:26:53.723: INFO: Pod "pod-subpath-test-configmap-m42g" satisfied condition "success or failure"
Jan 28 01:26:53.727: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-m42g container test-container-subpath-configmap-m42g: 
STEP: delete the pod
Jan 28 01:26:53.900: INFO: Waiting for pod pod-subpath-test-configmap-m42g to disappear
Jan 28 01:26:53.947: INFO: Pod pod-subpath-test-configmap-m42g no longer exists
STEP: Deleting pod pod-subpath-test-configmap-m42g
Jan 28 01:26:53.947: INFO: Deleting pod "pod-subpath-test-configmap-m42g" in namespace "subpath-6430"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:26:53.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6430" for this suite.

• [SLOW TEST:28.721 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":280,"completed":277,"skipped":4511,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:26:54.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 28 01:26:54.320: INFO: Waiting up to 5m0s for pod "pod-d212408e-67eb-4ae7-877d-4b22dddd0de8" in namespace "emptydir-762" to be "success or failure"
Jan 28 01:26:54.355: INFO: Pod "pod-d212408e-67eb-4ae7-877d-4b22dddd0de8": Phase="Pending", Reason="", readiness=false. Elapsed: 35.165724ms
Jan 28 01:26:56.366: INFO: Pod "pod-d212408e-67eb-4ae7-877d-4b22dddd0de8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04559393s
Jan 28 01:26:58.408: INFO: Pod "pod-d212408e-67eb-4ae7-877d-4b22dddd0de8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087745493s
Jan 28 01:27:00.441: INFO: Pod "pod-d212408e-67eb-4ae7-877d-4b22dddd0de8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120774258s
Jan 28 01:27:02.458: INFO: Pod "pod-d212408e-67eb-4ae7-877d-4b22dddd0de8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.137888317s
Jan 28 01:27:04.468: INFO: Pod "pod-d212408e-67eb-4ae7-877d-4b22dddd0de8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.14782027s
STEP: Saw pod success
Jan 28 01:27:04.468: INFO: Pod "pod-d212408e-67eb-4ae7-877d-4b22dddd0de8" satisfied condition "success or failure"
Jan 28 01:27:04.472: INFO: Trying to get logs from node jerma-node pod pod-d212408e-67eb-4ae7-877d-4b22dddd0de8 container test-container: 
STEP: delete the pod
Jan 28 01:27:04.533: INFO: Waiting for pod pod-d212408e-67eb-4ae7-877d-4b22dddd0de8 to disappear
Jan 28 01:27:04.602: INFO: Pod pod-d212408e-67eb-4ae7-877d-4b22dddd0de8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:27:04.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-762" for this suite.

• [SLOW TEST:10.555 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":278,"skipped":4550,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:27:04.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-9ea60ccd-262b-4d4f-b025-b3009573d13c
STEP: Creating a pod to test consume secrets
Jan 28 01:27:04.882: INFO: Waiting up to 5m0s for pod "pod-secrets-a6fa91a9-c446-4c8a-ad28-159affae7e6a" in namespace "secrets-7036" to be "success or failure"
Jan 28 01:27:04.913: INFO: Pod "pod-secrets-a6fa91a9-c446-4c8a-ad28-159affae7e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 31.397249ms
Jan 28 01:27:06.925: INFO: Pod "pod-secrets-a6fa91a9-c446-4c8a-ad28-159affae7e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043520067s
Jan 28 01:27:08.933: INFO: Pod "pod-secrets-a6fa91a9-c446-4c8a-ad28-159affae7e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051036153s
Jan 28 01:27:10.939: INFO: Pod "pod-secrets-a6fa91a9-c446-4c8a-ad28-159affae7e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057048138s
Jan 28 01:27:12.946: INFO: Pod "pod-secrets-a6fa91a9-c446-4c8a-ad28-159affae7e6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06382733s
STEP: Saw pod success
Jan 28 01:27:12.946: INFO: Pod "pod-secrets-a6fa91a9-c446-4c8a-ad28-159affae7e6a" satisfied condition "success or failure"
Jan 28 01:27:12.949: INFO: Trying to get logs from node jerma-node pod pod-secrets-a6fa91a9-c446-4c8a-ad28-159affae7e6a container secret-volume-test: 
STEP: delete the pod
Jan 28 01:27:13.004: INFO: Waiting for pod pod-secrets-a6fa91a9-c446-4c8a-ad28-159affae7e6a to disappear
Jan 28 01:27:13.007: INFO: Pod pod-secrets-a6fa91a9-c446-4c8a-ad28-159affae7e6a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:27:13.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7036" for this suite.
STEP: Destroying namespace "secret-namespace-8167" for this suite.

• [SLOW TEST:8.491 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":280,"completed":279,"skipped":4551,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 28 01:27:13.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Jan 28 01:27:13.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1447'
Jan 28 01:27:13.922: INFO: stderr: ""
Jan 28 01:27:13.922: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 28 01:27:13.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1447'
Jan 28 01:27:14.264: INFO: stderr: ""
Jan 28 01:27:14.264: INFO: stdout: "update-demo-nautilus-flmjz update-demo-nautilus-t8q6z "
Jan 28 01:27:14.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-flmjz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1447'
Jan 28 01:27:14.466: INFO: stderr: ""
Jan 28 01:27:14.466: INFO: stdout: ""
Jan 28 01:27:14.466: INFO: update-demo-nautilus-flmjz is created but not running
Jan 28 01:27:19.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1447'
Jan 28 01:27:20.049: INFO: stderr: ""
Jan 28 01:27:20.049: INFO: stdout: "update-demo-nautilus-flmjz update-demo-nautilus-t8q6z "
Jan 28 01:27:20.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-flmjz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1447'
Jan 28 01:27:20.877: INFO: stderr: ""
Jan 28 01:27:20.877: INFO: stdout: ""
Jan 28 01:27:20.877: INFO: update-demo-nautilus-flmjz is created but not running
Jan 28 01:27:25.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1447'
Jan 28 01:27:26.091: INFO: stderr: ""
Jan 28 01:27:26.091: INFO: stdout: "update-demo-nautilus-flmjz update-demo-nautilus-t8q6z "
Jan 28 01:27:26.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-flmjz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1447'
Jan 28 01:27:26.201: INFO: stderr: ""
Jan 28 01:27:26.201: INFO: stdout: "true"
Jan 28 01:27:26.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-flmjz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1447'
Jan 28 01:27:26.339: INFO: stderr: ""
Jan 28 01:27:26.339: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 01:27:26.339: INFO: validating pod update-demo-nautilus-flmjz
Jan 28 01:27:26.346: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 01:27:26.346: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 01:27:26.346: INFO: update-demo-nautilus-flmjz is verified up and running
Jan 28 01:27:26.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t8q6z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1447'
Jan 28 01:27:26.448: INFO: stderr: ""
Jan 28 01:27:26.448: INFO: stdout: "true"
Jan 28 01:27:26.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t8q6z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1447'
Jan 28 01:27:26.569: INFO: stderr: ""
Jan 28 01:27:26.569: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 28 01:27:26.569: INFO: validating pod update-demo-nautilus-t8q6z
Jan 28 01:27:26.595: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 28 01:27:26.595: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 28 01:27:26.595: INFO: update-demo-nautilus-t8q6z is verified up and running
STEP: using delete to clean up resources
Jan 28 01:27:26.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1447'
Jan 28 01:27:26.714: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 28 01:27:26.714: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 28 01:27:26.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1447'
Jan 28 01:27:26.841: INFO: stderr: "No resources found in kubectl-1447 namespace.\n"
Jan 28 01:27:26.841: INFO: stdout: ""
Jan 28 01:27:26.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1447 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 28 01:27:26.953: INFO: stderr: ""
Jan 28 01:27:26.953: INFO: stdout: "update-demo-nautilus-flmjz\nupdate-demo-nautilus-t8q6z\n"
Jan 28 01:27:27.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1447'
Jan 28 01:27:27.705: INFO: stderr: "No resources found in kubectl-1447 namespace.\n"
Jan 28 01:27:27.706: INFO: stdout: ""
Jan 28 01:27:27.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1447 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 28 01:27:28.341: INFO: stderr: ""
Jan 28 01:27:28.341: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 28 01:27:28.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1447" for this suite.

• [SLOW TEST:15.234 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":280,"completed":280,"skipped":4561,"failed":0}
SSSS
Jan 28 01:27:28.367: INFO: Running AfterSuite actions on all nodes
Jan 28 01:27:28.367: INFO: Running AfterSuite actions on node 1
Jan 28 01:27:28.367: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":280,"completed":280,"skipped":4565,"failed":0}

Ran 280 of 4845 Specs in 6501.459 seconds
SUCCESS! -- 280 Passed | 0 Failed | 0 Pending | 4565 Skipped
PASS